00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 91 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3269 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.041 The recommended git tool is: git 00:00:00.041 using credential 00000000-0000-0000-0000-000000000002 00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.093 Using shallow fetch with depth 1 00:00:00.093 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.093 > git --version # timeout=10 00:00:00.132 > git --version # 'git version 2.39.2' 00:00:00.132 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.628 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.639 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.649 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.649 > git config core.sparsecheckout # timeout=10 00:00:03.658 > git read-tree -mu HEAD # timeout=10 00:00:03.674 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.691 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.691 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.761 [Pipeline] Start of Pipeline 00:00:03.771 [Pipeline] library 00:00:03.773 Loading library shm_lib@master 00:00:03.773 Library shm_lib@master is cached. Copying from home. 00:00:03.787 [Pipeline] node 00:00:03.797 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.799 [Pipeline] { 00:00:03.808 [Pipeline] catchError 00:00:03.809 [Pipeline] { 00:00:03.823 [Pipeline] wrap 00:00:03.834 [Pipeline] { 00:00:03.844 [Pipeline] stage 00:00:03.846 [Pipeline] { (Prologue) 00:00:04.047 [Pipeline] sh 00:00:04.329 + logger -p user.info -t JENKINS-CI 00:00:04.347 [Pipeline] echo 00:00:04.348 Node: GP11 00:00:04.353 [Pipeline] sh 00:00:04.654 [Pipeline] setCustomBuildProperty 00:00:04.666 [Pipeline] echo 00:00:04.667 Cleanup processes 00:00:04.671 [Pipeline] sh 00:00:04.950 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.951 400528 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.965 [Pipeline] sh 00:00:05.251 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.251 ++ grep -v 'sudo pgrep' 00:00:05.251 ++ awk '{print $1}' 00:00:05.251 + sudo kill -9 00:00:05.251 + true 00:00:05.265 [Pipeline] cleanWs 00:00:05.276 [WS-CLEANUP] Deleting project workspace... 00:00:05.276 [WS-CLEANUP] Deferred wipeout is used... 
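The cleanup step above collects stale PIDs by piping pgrep through grep and awk; here no leftover processes matched, so kill -9 ran with an empty argument list and failed, which the trailing '+ true' absorbs. A minimal sketch of that guard pattern in bash (the explicit emptiness check is an alternative to the '+ true' style the job uses; the workspace path is copied from the log):

# Kill any processes left over from a previous run of this workspace.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# pgrep -af prints "PID full-command"; drop the pgrep invocation itself,
# then keep only the PID column.
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill -9 with no PIDs exits non-zero, so only invoke it when something matched.
if [ -n "$pids" ]; then
    sudo kill -9 $pids
fi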
00:00:05.283 [WS-CLEANUP] done 00:00:05.286 [Pipeline] setCustomBuildProperty 00:00:05.299 [Pipeline] sh 00:00:05.583 + sudo git config --global --replace-all safe.directory '*' 00:00:05.654 [Pipeline] httpRequest 00:00:05.686 [Pipeline] echo 00:00:05.687 Sorcerer 10.211.164.101 is alive 00:00:05.695 [Pipeline] httpRequest 00:00:05.699 HttpMethod: GET 00:00:05.699 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.700 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.719 Response Code: HTTP/1.1 200 OK 00:00:05.719 Success: Status code 200 is in the accepted range: 200,404 00:00:05.719 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.396 [Pipeline] sh 00:00:11.678 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.694 [Pipeline] httpRequest 00:00:11.729 [Pipeline] echo 00:00:11.731 Sorcerer 10.211.164.101 is alive 00:00:11.739 [Pipeline] httpRequest 00:00:11.743 HttpMethod: GET 00:00:11.744 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.744 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.770 Response Code: HTTP/1.1 200 OK 00:00:11.771 Success: Status code 200 is in the accepted range: 200,404 00:00:11.771 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:32.969 [Pipeline] sh 00:01:33.260 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:36.563 [Pipeline] sh 00:01:36.856 + git -C spdk log --oneline -n5 00:01:36.856 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:36.856 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:36.856 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:36.856 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:36.856 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:36.877 [Pipeline] withCredentials 00:01:36.889 > git --version # timeout=10 00:01:36.901 > git --version # 'git version 2.39.2' 00:01:36.921 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:36.924 [Pipeline] { 00:01:36.936 [Pipeline] retry 00:01:36.938 [Pipeline] { 00:01:36.957 [Pipeline] sh 00:01:37.294 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:37.881 [Pipeline] } 00:01:37.905 [Pipeline] // retry 00:01:37.911 [Pipeline] } 00:01:37.932 [Pipeline] // withCredentials 00:01:37.941 [Pipeline] httpRequest 00:01:37.968 [Pipeline] echo 00:01:37.970 Sorcerer 10.211.164.101 is alive 00:01:37.979 [Pipeline] httpRequest 00:01:37.983 HttpMethod: GET 00:01:37.984 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:37.985 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:37.987 Response Code: HTTP/1.1 200 OK 00:01:37.988 Success: Status code 200 is in the accepted range: 200,404 00:01:37.988 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:41.901 [Pipeline] sh 00:01:42.183 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:44.098 [Pipeline] sh 00:01:44.383 + git -C dpdk log --oneline -n5 00:01:44.383 eeb0605f11 version: 23.11.0 00:01:44.383 238778122a doc: 
update release notes for 23.11 00:01:44.383 46aa6b3cfc doc: fix description of RSS features 00:01:44.383 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.383 7e421ae345 devtools: support skipping forbid rule check 00:01:44.394 [Pipeline] } 00:01:44.411 [Pipeline] // stage 00:01:44.420 [Pipeline] stage 00:01:44.422 [Pipeline] { (Prepare) 00:01:44.444 [Pipeline] writeFile 00:01:44.459 [Pipeline] sh 00:01:44.744 + logger -p user.info -t JENKINS-CI 00:01:44.758 [Pipeline] sh 00:01:45.041 + logger -p user.info -t JENKINS-CI 00:01:45.057 [Pipeline] sh 00:01:45.347 + cat autorun-spdk.conf 00:01:45.347 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.347 SPDK_TEST_NVMF=1 00:01:45.347 SPDK_TEST_NVME_CLI=1 00:01:45.347 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.347 SPDK_TEST_NVMF_NICS=e810 00:01:45.347 SPDK_TEST_VFIOUSER=1 00:01:45.347 SPDK_RUN_UBSAN=1 00:01:45.347 NET_TYPE=phy 00:01:45.347 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.347 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.355 RUN_NIGHTLY=1 00:01:45.360 [Pipeline] readFile 00:01:45.396 [Pipeline] withEnv 00:01:45.398 [Pipeline] { 00:01:45.416 [Pipeline] sh 00:01:45.703 + set -ex 00:01:45.703 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:45.703 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:45.703 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.703 ++ SPDK_TEST_NVMF=1 00:01:45.703 ++ SPDK_TEST_NVME_CLI=1 00:01:45.703 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.703 ++ SPDK_TEST_NVMF_NICS=e810 00:01:45.703 ++ SPDK_TEST_VFIOUSER=1 00:01:45.703 ++ SPDK_RUN_UBSAN=1 00:01:45.703 ++ NET_TYPE=phy 00:01:45.703 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.703 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.703 ++ RUN_NIGHTLY=1 00:01:45.703 + case $SPDK_TEST_NVMF_NICS in 00:01:45.703 + DRIVERS=ice 00:01:45.703 + [[ tcp == \r\d\m\a ]] 00:01:45.703 + [[ -n ice ]] 00:01:45.703 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.703 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:45.703 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:45.703 rmmod: ERROR: Module irdma is not currently loaded 00:01:45.703 rmmod: ERROR: Module i40iw is not currently loaded 00:01:45.703 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:45.703 + true 00:01:45.703 + for D in $DRIVERS 00:01:45.703 + sudo modprobe ice 00:01:45.703 + exit 0 00:01:45.714 [Pipeline] } 00:01:45.733 [Pipeline] // withEnv 00:01:45.740 [Pipeline] } 00:01:45.756 [Pipeline] // stage 00:01:45.766 [Pipeline] catchError 00:01:45.768 [Pipeline] { 00:01:45.784 [Pipeline] timeout 00:01:45.785 Timeout set to expire in 50 min 00:01:45.786 [Pipeline] { 00:01:45.803 [Pipeline] stage 00:01:45.805 [Pipeline] { (Tests) 00:01:45.822 [Pipeline] sh 00:01:46.140 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.140 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.140 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.140 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:46.140 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.140 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.140 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:46.140 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.140 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.140 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.140 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:46.140 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.140 + source /etc/os-release 00:01:46.140 ++ NAME='Fedora Linux' 00:01:46.140 ++ VERSION='38 (Cloud Edition)' 00:01:46.140 ++ ID=fedora 00:01:46.140 ++ VERSION_ID=38 00:01:46.140 ++ VERSION_CODENAME= 00:01:46.140 ++ PLATFORM_ID=platform:f38 00:01:46.140 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:46.140 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.140 ++ LOGO=fedora-logo-icon 00:01:46.140 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:46.140 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.140 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:46.140 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.140 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.140 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.140 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:46.140 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.140 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:46.140 ++ SUPPORT_END=2024-05-14 00:01:46.140 ++ VARIANT='Cloud Edition' 00:01:46.140 ++ VARIANT_ID=cloud 00:01:46.140 + uname -a 00:01:46.140 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:46.140 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:47.079 Hugepages 00:01:47.079 node hugesize free / total 00:01:47.079 node0 1048576kB 0 / 0 00:01:47.079 node0 2048kB 0 / 0 00:01:47.079 node1 1048576kB 0 / 0 00:01:47.079 node1 2048kB 0 / 0 00:01:47.079 00:01:47.079 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.079 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:47.079 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:47.079 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:47.079 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:47.079 + rm -f /tmp/spdk-ld-path 00:01:47.079 + source autorun-spdk.conf 00:01:47.079 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.079 ++ SPDK_TEST_NVMF=1 00:01:47.079 ++ SPDK_TEST_NVME_CLI=1 00:01:47.079 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.079 ++ SPDK_TEST_NVMF_NICS=e810 00:01:47.079 ++ SPDK_TEST_VFIOUSER=1 00:01:47.079 ++ SPDK_RUN_UBSAN=1 00:01:47.079 ++ NET_TYPE=phy 00:01:47.079 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.079 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.079 ++ RUN_NIGHTLY=1 00:01:47.079 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.079 + [[ -n '' ]] 00:01:47.079 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.079 + for M in /var/spdk/build-*-manifest.txt 00:01:47.079 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.079 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.079 + for M in /var/spdk/build-*-manifest.txt 00:01:47.079 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.079 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.079 ++ uname 00:01:47.079 + [[ Linux == \L\i\n\u\x ]] 00:01:47.079 + sudo dmesg -T 00:01:47.079 + sudo dmesg --clear 00:01:47.338 + dmesg_pid=401870 00:01:47.338 + [[ Fedora Linux == FreeBSD ]] 00:01:47.338 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.338 + sudo dmesg -Tw 00:01:47.338 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.338 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.338 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.338 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.338 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.338 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.338 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.338 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.338 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.338 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.338 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.338 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.338 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.338 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.338 Test configuration: 00:01:47.338 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.338 SPDK_TEST_NVMF=1 00:01:47.338 SPDK_TEST_NVME_CLI=1 00:01:47.338 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.338 SPDK_TEST_NVMF_NICS=e810 00:01:47.338 SPDK_TEST_VFIOUSER=1 00:01:47.338 SPDK_RUN_UBSAN=1 00:01:47.338 NET_TYPE=phy 00:01:47.338 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.338 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.338 RUN_NIGHTLY=1 06:30:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:47.338 06:30:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.338 06:30:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.338 06:30:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.338 06:30:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.338 06:30:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.338 06:30:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.338 06:30:34 -- paths/export.sh@5 -- $ export PATH 00:01:47.338 06:30:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.338 06:30:34 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:47.338 06:30:34 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:47.338 06:30:34 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721017834.XXXXXX 00:01:47.338 06:30:34 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721017834.SUv8KC 00:01:47.338 06:30:34 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:47.338 06:30:34 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:47.338 06:30:34 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.338 06:30:34 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:47.338 06:30:34 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:47.338 06:30:34 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.338 06:30:34 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:47.338 06:30:34 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:47.338 06:30:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.338 06:30:34 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:47.338 06:30:34 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:47.338 06:30:34 -- pm/common@17 -- $ local monitor 00:01:47.338 06:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.338 06:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.338 06:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.338 06:30:34 -- pm/common@21 -- $ date +%s 00:01:47.338 06:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.338 06:30:34 -- pm/common@21 -- $ date +%s 00:01:47.338 06:30:34 -- pm/common@25 -- $ sleep 1 00:01:47.338 06:30:34 -- pm/common@21 -- $ date +%s 00:01:47.338 06:30:34 -- pm/common@21 -- $ date +%s 00:01:47.339 06:30:34 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721017834 00:01:47.339 06:30:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721017834 00:01:47.339 06:30:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721017834 00:01:47.339 06:30:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721017834 00:01:47.339 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721017834_collect-vmstat.pm.log 00:01:47.339 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721017834_collect-cpu-load.pm.log 00:01:47.339 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721017834_collect-cpu-temp.pm.log 00:01:47.339 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721017834_collect-bmc-pm.bmc.pm.log 00:01:48.276 06:30:35 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:48.276 06:30:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.276 06:30:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.276 06:30:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.276 06:30:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.276 Mon Jul 15 04:30:35 AM UTC 2024 00:01:48.276 06:30:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.276 v24.05-13-g5fa2f5086 00:01:48.276 06:30:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.276 06:30:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.276 06:30:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.276 06:30:35 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:48.276 06:30:35 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:48.276 06:30:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.276 ************************************ 00:01:48.276 START TEST ubsan 00:01:48.276 ************************************ 00:01:48.276 06:30:35 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:48.276 using ubsan 00:01:48.276 00:01:48.276 real 0m0.000s 00:01:48.276 user 0m0.000s 00:01:48.276 sys 0m0.000s 00:01:48.276 06:30:35 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:48.276 06:30:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.276 ************************************ 00:01:48.276 END TEST ubsan 00:01:48.276 ************************************ 00:01:48.276 06:30:35 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:48.276 06:30:35 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:48.276 06:30:35 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:48.276 06:30:35 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:48.276 06:30:35 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:48.276 06:30:35 -- common/autotest_common.sh@10 -- $ set +x 
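The "START TEST ubsan" / "END TEST ubsan" banners and the real/user/sys timing lines above come from SPDK's run_test helper, which wraps a named command in banners and times it. A simplified reconstruction, inferred from the output in this log rather than copied from autotest_common.sh:

# Sketch of the run_test pattern: banner, timed command, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # emits the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test ubsan echo 'using ubsan'   # the invocation traced above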
00:01:48.276 ************************************ 00:01:48.277 START TEST build_native_dpdk 00:01:48.277 ************************************ 00:01:48.277 06:30:35 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:48.277 eeb0605f11 version: 23.11.0 00:01:48.277 238778122a doc: update release notes for 23.11 00:01:48.277 46aa6b3cfc doc: fix description of RSS features 00:01:48.277 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:48.277 7e421ae345 devtools: support skipping forbid rule check 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:48.277 06:30:35 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:48.277 06:30:35 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:48.277 patching file config/rte_config.h 00:01:48.277 Hunk #1 succeeded at 60 (offset 1 line). 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:48.277 06:30:35 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:52.470 The Meson build system 00:01:52.470 Version: 1.3.1 00:01:52.470 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:52.470 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:52.470 Build type: native build 00:01:52.470 Program cat found: YES (/usr/bin/cat) 00:01:52.470 Project name: DPDK 00:01:52.470 Project version: 23.11.0 00:01:52.470 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.470 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:52.470 Host machine cpu family: x86_64 00:01:52.470 Host machine cpu: x86_64 00:01:52.470 Message: ## Building in Developer Mode ## 00:01:52.470 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.470 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:52.470 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.470 Program python3 found: YES (/usr/bin/python3) 00:01:52.470 Program cat found: YES (/usr/bin/cat) 00:01:52.470 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
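The cmp_versions trace a few lines back splits "23.11.0" and "21.11.0" on the separators ".", "-" and ":" and compares the components numerically; 23 > 21 in the first position, so the "lt" test returns 1 and the 21.11-compatibility branch is skipped. A condensed sketch of that logic (simplified to "."-separated versions; the real helper in scripts/common.sh is more general):

# version_lt A B: succeed iff version A is strictly lower than version B.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < 3; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 23.11.0 21.11.0 || echo "not lower"   # 23 > 21, so this prints "not lower"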
00:01:52.470 Compiler for C supports arguments -march=native: YES 00:01:52.470 Checking for size of "void *" : 8 00:01:52.470 Checking for size of "void *" : 8 (cached) 00:01:52.470 Library m found: YES 00:01:52.470 Library numa found: YES 00:01:52.470 Has header "numaif.h" : YES 00:01:52.470 Library fdt found: NO 00:01:52.470 Library execinfo found: NO 00:01:52.470 Has header "execinfo.h" : YES 00:01:52.470 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.470 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.470 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.470 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.470 Run-time dependency openssl found: YES 3.0.9 00:01:52.470 Run-time dependency libpcap found: YES 1.10.4 00:01:52.470 Has header "pcap.h" with dependency libpcap: YES 00:01:52.470 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.470 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.470 Compiler for C supports arguments -Wformat: YES 00:01:52.470 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.470 Compiler for C supports arguments -Wformat-security: NO 00:01:52.470 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.470 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.470 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.470 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.470 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.470 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.470 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.470 Compiler for C supports arguments -Wundef: YES 00:01:52.470 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.470 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.470 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:52.470 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.470 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.470 Program objdump found: YES (/usr/bin/objdump) 00:01:52.470 Compiler for C supports arguments -mavx512f: YES 00:01:52.470 Checking if "AVX512 checking" compiles: YES 00:01:52.470 Fetching value of define "__SSE4_2__" : 1 00:01:52.470 Fetching value of define "__AES__" : 1 00:01:52.470 Fetching value of define "__AVX__" : 1 00:01:52.470 Fetching value of define "__AVX2__" : (undefined) 00:01:52.471 Fetching value of define "__AVX512BW__" : (undefined) 00:01:52.471 Fetching value of define "__AVX512CD__" : (undefined) 00:01:52.471 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:52.471 Fetching value of define "__AVX512F__" : (undefined) 00:01:52.471 Fetching value of define "__AVX512VL__" : (undefined) 00:01:52.471 Fetching value of define "__PCLMUL__" : 1 00:01:52.471 Fetching value of define "__RDRND__" : 1 00:01:52.471 Fetching value of define "__RDSEED__" : (undefined) 00:01:52.471 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:52.471 Fetching value of define "__znver1__" : (undefined) 00:01:52.471 Fetching value of define "__znver2__" : (undefined) 00:01:52.471 Fetching value of define "__znver3__" : (undefined) 00:01:52.471 Fetching value of define "__znver4__" : (undefined) 00:01:52.471 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.471 Message: lib/log: Defining dependency "log" 00:01:52.471 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:52.471 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.471 Checking for function "getentropy" : NO 00:01:52.471 Message: lib/eal: Defining dependency "eal" 00:01:52.471 Message: lib/ring: Defining dependency "ring" 00:01:52.471 Message: lib/rcu: Defining dependency "rcu" 00:01:52.471 Message: lib/mempool: Defining dependency "mempool" 00:01:52.471 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.471 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.471 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.471 Compiler for C supports arguments -mpclmul: YES 00:01:52.471 Compiler for C supports arguments -maes: YES 00:01:52.471 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.471 Compiler for C supports arguments -mavx512bw: YES 00:01:52.471 Compiler for C supports arguments -mavx512dq: YES 00:01:52.471 Compiler for C supports arguments -mavx512vl: YES 00:01:52.471 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.471 Compiler for C supports arguments -mavx2: YES 00:01:52.471 Compiler for C supports arguments -mavx: YES 00:01:52.471 Message: lib/net: Defining dependency "net" 00:01:52.471 Message: lib/meter: Defining dependency "meter" 00:01:52.471 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.471 Message: lib/pci: Defining dependency "pci" 00:01:52.471 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.471 Message: lib/metrics: Defining dependency "metrics" 00:01:52.471 Message: lib/hash: Defining dependency "hash" 00:01:52.471 Message: lib/timer: Defining dependency "timer" 00:01:52.471 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:52.471 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:52.471 Message: lib/acl: Defining dependency "acl" 00:01:52.471 Message: lib/bbdev: Defining dependency "bbdev" 00:01:52.471 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:52.471 Run-time dependency libelf found: YES 0.190 00:01:52.471 Message: lib/bpf: Defining dependency "bpf" 00:01:52.471 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:52.471 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.471 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.471 Message: lib/distributor: Defining dependency "distributor" 00:01:52.471 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.471 Message: lib/efd: Defining dependency "efd" 00:01:52.471 Message: lib/eventdev: Defining dependency "eventdev" 00:01:52.471 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:52.471 Message: lib/gpudev: Defining dependency "gpudev" 00:01:52.471 Message: lib/gro: Defining dependency "gro" 00:01:52.471 Message: lib/gso: Defining dependency "gso" 00:01:52.471 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:52.471 Message: lib/jobstats: Defining dependency "jobstats" 00:01:52.471 Message: lib/latencystats: Defining dependency "latencystats" 00:01:52.471 Message: lib/lpm: Defining dependency "lpm" 00:01:52.471 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:52.471 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:52.471 Message: lib/member: Defining dependency "member" 00:01:52.471 Message: lib/pcapng: Defining dependency "pcapng" 00:01:52.471 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.471 Message: lib/power: Defining dependency "power" 00:01:52.471 Message: lib/rawdev: Defining dependency "rawdev" 00:01:52.471 Message: lib/regexdev: Defining dependency "regexdev" 00:01:52.471 Message: lib/mldev: Defining dependency "mldev" 00:01:52.471 Message: lib/rib: Defining dependency "rib" 00:01:52.471 Message: lib/reorder: Defining dependency "reorder" 00:01:52.471 Message: lib/sched: Defining dependency "sched" 00:01:52.471 Message: lib/security: Defining dependency "security" 00:01:52.471 Message: lib/stack: Defining dependency "stack" 00:01:52.471 Has header "linux/userfaultfd.h" : YES 00:01:52.471 Has header "linux/vduse.h" : YES 00:01:52.471 Message: lib/vhost: Defining dependency "vhost" 00:01:52.471 Message: lib/ipsec: Defining dependency "ipsec" 00:01:52.471 Message: lib/pdcp: Defining dependency "pdcp" 00:01:52.471 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.471 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:52.471 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:52.471 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:52.471 Message: lib/fib: Defining dependency "fib" 00:01:52.471 Message: lib/port: Defining dependency "port" 00:01:52.471 Message: lib/pdump: Defining dependency "pdump" 00:01:52.471 Message: lib/table: Defining dependency "table" 00:01:52.471 Message: lib/pipeline: Defining dependency "pipeline" 00:01:52.471 Message: lib/graph: Defining dependency "graph" 00:01:52.471 Message: lib/node: Defining dependency "node" 00:01:53.880 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.880 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.880 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.880 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.880 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:53.880 Compiler for C supports arguments -Wno-unused-value: YES 00:01:53.880 Compiler for C supports arguments -Wno-format: YES 00:01:53.880 Compiler for C supports arguments -Wno-format-security: YES 00:01:53.880 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:53.880 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:53.880 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:53.880 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:53.880 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.880 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.880 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.880 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:53.880 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:53.880 Has header "sys/epoll.h" : YES 00:01:53.881 Program doxygen found: YES (/usr/bin/doxygen) 00:01:53.881 Configuring doxy-api-html.conf using configuration 00:01:53.881 Configuring doxy-api-man.conf using configuration 00:01:53.881 Program mandb found: YES (/usr/bin/mandb) 00:01:53.881 Program sphinx-build found: NO 00:01:53.881 Configuring rte_build_config.h using configuration 00:01:53.881 Message: 00:01:53.881 ================= 00:01:53.881 Applications Enabled 00:01:53.881 
================= 00:01:53.881 00:01:53.881 apps: 00:01:53.881 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:53.881 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:53.881 test-pmd, test-regex, test-sad, test-security-perf, 00:01:53.881 00:01:53.881 Message: 00:01:53.881 ================= 00:01:53.881 Libraries Enabled 00:01:53.881 ================= 00:01:53.881 00:01:53.881 libs: 00:01:53.881 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.881 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:53.881 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:53.881 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:53.881 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:53.881 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:53.881 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:53.881 00:01:53.881 00:01:53.881 Message: 00:01:53.881 =============== 00:01:53.881 Drivers Enabled 00:01:53.881 =============== 00:01:53.881 00:01:53.881 common: 00:01:53.881 00:01:53.881 bus: 00:01:53.881 pci, vdev, 00:01:53.881 mempool: 00:01:53.881 ring, 00:01:53.881 dma: 00:01:53.881 00:01:53.881 net: 00:01:53.881 i40e, 00:01:53.881 raw: 00:01:53.881 00:01:53.881 crypto: 00:01:53.881 00:01:53.881 compress: 00:01:53.881 00:01:53.881 regex: 00:01:53.881 00:01:53.881 ml: 00:01:53.881 00:01:53.881 vdpa: 00:01:53.881 00:01:53.881 event: 00:01:53.881 00:01:53.881 baseband: 00:01:53.881 00:01:53.881 gpu: 00:01:53.881 00:01:53.881 00:01:53.881 Message: 00:01:53.881 ================= 00:01:53.881 Content Skipped 00:01:53.881 ================= 00:01:53.881 00:01:53.881 apps: 00:01:53.881 00:01:53.881 libs: 00:01:53.881 00:01:53.881 drivers: 00:01:53.881 common/cpt: not in enabled drivers build config 00:01:53.881 common/dpaax: not in enabled drivers build config 00:01:53.881 common/iavf: not in enabled drivers build config 00:01:53.881 common/idpf: not in enabled drivers build config 00:01:53.881 common/mvep: not in enabled drivers build config 00:01:53.881 common/octeontx: not in enabled drivers build config 00:01:53.881 bus/auxiliary: not in enabled drivers build config 00:01:53.881 bus/cdx: not in enabled drivers build config 00:01:53.881 bus/dpaa: not in enabled drivers build config 00:01:53.881 bus/fslmc: not in enabled drivers build config 00:01:53.881 bus/ifpga: not in enabled drivers build config 00:01:53.881 bus/platform: not in enabled drivers build config 00:01:53.881 bus/vmbus: not in enabled drivers build config 00:01:53.881 common/cnxk: not in enabled drivers build config 00:01:53.881 common/mlx5: not in enabled drivers build config 00:01:53.881 common/nfp: not in enabled drivers build config 00:01:53.881 common/qat: not in enabled drivers build config 00:01:53.881 common/sfc_efx: not in enabled drivers build config 00:01:53.881 mempool/bucket: not in enabled drivers build config 00:01:53.881 mempool/cnxk: not in enabled drivers build config 00:01:53.881 mempool/dpaa: not in enabled drivers build config 00:01:53.881 mempool/dpaa2: not in enabled drivers build config 00:01:53.881 mempool/octeontx: not in enabled drivers build config 00:01:53.881 mempool/stack: not in enabled drivers build config 00:01:53.881 dma/cnxk: not in enabled drivers build config 00:01:53.881 dma/dpaa: not in enabled drivers build config 00:01:53.881 dma/dpaa2: not in enabled drivers build 
config 00:01:53.881 dma/hisilicon: not in enabled drivers build config 00:01:53.881 dma/idxd: not in enabled drivers build config 00:01:53.881 dma/ioat: not in enabled drivers build config 00:01:53.881 dma/skeleton: not in enabled drivers build config 00:01:53.881 net/af_packet: not in enabled drivers build config 00:01:53.881 net/af_xdp: not in enabled drivers build config 00:01:53.881 net/ark: not in enabled drivers build config 00:01:53.881 net/atlantic: not in enabled drivers build config 00:01:53.881 net/avp: not in enabled drivers build config 00:01:53.881 net/axgbe: not in enabled drivers build config 00:01:53.881 net/bnx2x: not in enabled drivers build config 00:01:53.881 net/bnxt: not in enabled drivers build config 00:01:53.881 net/bonding: not in enabled drivers build config 00:01:53.881 net/cnxk: not in enabled drivers build config 00:01:53.881 net/cpfl: not in enabled drivers build config 00:01:53.881 net/cxgbe: not in enabled drivers build config 00:01:53.881 net/dpaa: not in enabled drivers build config 00:01:53.881 net/dpaa2: not in enabled drivers build config 00:01:53.881 net/e1000: not in enabled drivers build config 00:01:53.881 net/ena: not in enabled drivers build config 00:01:53.881 net/enetc: not in enabled drivers build config 00:01:53.881 net/enetfec: not in enabled drivers build config 00:01:53.881 net/enic: not in enabled drivers build config 00:01:53.881 net/failsafe: not in enabled drivers build config 00:01:53.881 net/fm10k: not in enabled drivers build config 00:01:53.881 net/gve: not in enabled drivers build config 00:01:53.881 net/hinic: not in enabled drivers build config 00:01:53.881 net/hns3: not in enabled drivers build config 00:01:53.881 net/iavf: not in enabled drivers build config 00:01:53.881 net/ice: not in enabled drivers build config 00:01:53.881 net/idpf: not in enabled drivers build config 00:01:53.881 net/igc: not in enabled drivers build config 00:01:53.881 net/ionic: not in enabled drivers build config 00:01:53.881 net/ipn3ke: not in enabled drivers build config 00:01:53.881 net/ixgbe: not in enabled drivers build config 00:01:53.881 net/mana: not in enabled drivers build config 00:01:53.881 net/memif: not in enabled drivers build config 00:01:53.881 net/mlx4: not in enabled drivers build config 00:01:53.881 net/mlx5: not in enabled drivers build config 00:01:53.881 net/mvneta: not in enabled drivers build config 00:01:53.881 net/mvpp2: not in enabled drivers build config 00:01:53.881 net/netvsc: not in enabled drivers build config 00:01:53.881 net/nfb: not in enabled drivers build config 00:01:53.881 net/nfp: not in enabled drivers build config 00:01:53.881 net/ngbe: not in enabled drivers build config 00:01:53.881 net/null: not in enabled drivers build config 00:01:53.881 net/octeontx: not in enabled drivers build config 00:01:53.881 net/octeon_ep: not in enabled drivers build config 00:01:53.881 net/pcap: not in enabled drivers build config 00:01:53.881 net/pfe: not in enabled drivers build config 00:01:53.881 net/qede: not in enabled drivers build config 00:01:53.881 net/ring: not in enabled drivers build config 00:01:53.881 net/sfc: not in enabled drivers build config 00:01:53.881 net/softnic: not in enabled drivers build config 00:01:53.881 net/tap: not in enabled drivers build config 00:01:53.881 net/thunderx: not in enabled drivers build config 00:01:53.881 net/txgbe: not in enabled drivers build config 00:01:53.881 net/vdev_netvsc: not in enabled drivers build config 00:01:53.881 net/vhost: not in enabled drivers build config 
00:01:53.881 net/virtio: not in enabled drivers build config 00:01:53.881 net/vmxnet3: not in enabled drivers build config 00:01:53.881 raw/cnxk_bphy: not in enabled drivers build config 00:01:53.881 raw/cnxk_gpio: not in enabled drivers build config 00:01:53.881 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:53.882 raw/ifpga: not in enabled drivers build config 00:01:53.882 raw/ntb: not in enabled drivers build config 00:01:53.882 raw/skeleton: not in enabled drivers build config 00:01:53.882 crypto/armv8: not in enabled drivers build config 00:01:53.882 crypto/bcmfs: not in enabled drivers build config 00:01:53.882 crypto/caam_jr: not in enabled drivers build config 00:01:53.882 crypto/ccp: not in enabled drivers build config 00:01:53.882 crypto/cnxk: not in enabled drivers build config 00:01:53.882 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.882 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.882 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.882 crypto/mlx5: not in enabled drivers build config 00:01:53.882 crypto/mvsam: not in enabled drivers build config 00:01:53.882 crypto/nitrox: not in enabled drivers build config 00:01:53.882 crypto/null: not in enabled drivers build config 00:01:53.882 crypto/octeontx: not in enabled drivers build config 00:01:53.882 crypto/openssl: not in enabled drivers build config 00:01:53.882 crypto/scheduler: not in enabled drivers build config 00:01:53.882 crypto/uadk: not in enabled drivers build config 00:01:53.882 crypto/virtio: not in enabled drivers build config 00:01:53.882 compress/isal: not in enabled drivers build config 00:01:53.882 compress/mlx5: not in enabled drivers build config 00:01:53.882 compress/octeontx: not in enabled drivers build config 00:01:53.882 compress/zlib: not in enabled drivers build config 00:01:53.882 regex/mlx5: not in enabled drivers build config 00:01:53.882 regex/cn9k: not in enabled drivers build config 00:01:53.882 ml/cnxk: not in enabled drivers build config 00:01:53.882 vdpa/ifc: not in enabled drivers build config 00:01:53.882 vdpa/mlx5: not in enabled drivers build config 00:01:53.882 vdpa/nfp: not in enabled drivers build config 00:01:53.882 vdpa/sfc: not in enabled drivers build config 00:01:53.882 event/cnxk: not in enabled drivers build config 00:01:53.882 event/dlb2: not in enabled drivers build config 00:01:53.882 event/dpaa: not in enabled drivers build config 00:01:53.882 event/dpaa2: not in enabled drivers build config 00:01:53.882 event/dsw: not in enabled drivers build config 00:01:53.882 event/opdl: not in enabled drivers build config 00:01:53.882 event/skeleton: not in enabled drivers build config 00:01:53.882 event/sw: not in enabled drivers build config 00:01:53.882 event/octeontx: not in enabled drivers build config 00:01:53.882 baseband/acc: not in enabled drivers build config 00:01:53.882 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:53.882 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:53.882 baseband/la12xx: not in enabled drivers build config 00:01:53.882 baseband/null: not in enabled drivers build config 00:01:53.882 baseband/turbo_sw: not in enabled drivers build config 00:01:53.882 gpu/cuda: not in enabled drivers build config 00:01:53.882 00:01:53.882 00:01:53.882 Build targets in project: 220 00:01:53.882 00:01:53.882 DPDK 23.11.0 00:01:53.882 00:01:53.882 User defined options 00:01:53.882 libdir : lib 00:01:53.882 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.882 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:53.882 c_link_args : 00:01:53.882 enable_docs : false 00:01:53.882 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:53.882 enable_kmods : false 00:01:53.882 machine : native 00:01:53.882 tests : false 00:01:53.882 00:01:53.882 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.882 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:53.882 06:30:41 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:53.882 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:53.882 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.882 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.882 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:53.882 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.882 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.882 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.882 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.882 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.882 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.882 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.882 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.882 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.882 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.882 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.882 [15/710] Linking static target lib/librte_kvargs.a 00:01:54.145 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.145 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.145 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.145 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.145 [20/710] Linking static target lib/librte_log.a 00:01:54.145 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.405 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.981 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.981 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.981 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.981 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.981 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.981 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.981 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.981 [30/710] Linking target lib/librte_log.so.24.0 00:01:54.981 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.981 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.981 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.981 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.981 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.981 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.981 [37/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.981 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.981 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.981 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.981 [41/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.981 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.981 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.981 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.981 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.981 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:54.981 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:54.981 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.981 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.247 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.247 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.247 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.247 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.247 [54/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:55.247 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.247 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.247 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.247 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.247 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.247 [60/710] Linking target lib/librte_kvargs.so.24.0 00:01:55.247 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.247 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.247 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.247 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.512 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:55.512 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.512 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.512 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.773 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.773 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.773 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.773 
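Note: the "User defined options" summary earlier in the log is the meson configuration this build ran with, and the deprecation warning means it was produced with the older `meson [options]` spelling. A minimal sketch of an equivalent invocation in the current `meson setup` form follows; the prefix, libdir, c_args and driver list are copied from the summary, while the overall command shape is an assumption, since the actual command line assembled by autobuild_common.sh is not shown in this log:

  # Sketch only: reconfigure with the non-deprecated `meson setup` syntax.
  $ meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  $ ninja -C build-tmp -j48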
[72/710] Linking static target lib/librte_pci.a 00:01:55.773 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.773 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.773 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.037 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.037 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.037 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.037 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.037 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.037 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.037 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.037 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.037 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.037 [85/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.037 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.037 [87/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.037 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.037 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.037 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.299 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.299 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.299 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.299 [94/710] Linking static target lib/librte_ring.a 00:01:56.299 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.299 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.299 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:56.299 [98/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.299 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.299 [100/710] Linking static target lib/librte_meter.a 00:01:56.299 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:56.299 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.299 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.562 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.562 [105/710] Linking static target lib/librte_telemetry.a 00:01:56.562 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.562 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:56.562 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.562 [109/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.562 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.562 [111/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.562 [112/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.562 [113/710] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.562 [114/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.562 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.562 [116/710] Linking static target lib/librte_eal.a 00:01:56.828 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.828 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.828 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.828 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.829 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.829 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.829 [123/710] Linking static target lib/librte_net.a 00:01:56.829 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.829 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.089 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.089 [127/710] Linking static target lib/librte_mempool.a 00:01:57.089 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.089 [129/710] Linking static target lib/librte_cmdline.a 00:01:57.089 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.089 [131/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.358 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:57.358 [133/710] Linking target lib/librte_telemetry.so.24.0 00:01:57.358 [134/710] Linking static target lib/librte_cfgfile.a 00:01:57.358 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.358 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.358 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:57.358 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:57.358 [139/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.358 [140/710] Linking static target lib/librte_metrics.a 00:01:57.358 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.358 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:57.358 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:57.618 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:57.618 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:57.618 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:57.618 [147/710] Linking static target lib/librte_bitratestats.a 00:01:57.618 [148/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.618 [149/710] Linking static target lib/librte_rcu.a 00:01:57.618 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:57.618 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:57.618 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.618 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:57.881 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
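Note: two kinds of bookkeeping steps are interleaved with the compiles above. The "Generating lib/*.sym_chk" targets are DPDK's build-time check that each library's version map matches the symbols it really exports, and the "Generating symbol file" steps are Meson's own symbol tracking, which lets ninja skip relinking dependents when a library's exported symbol list has not changed. To eyeball the exports of a just-linked library by hand (a sketch; the path assumes the build-tmp layout used in this run):

  # List the dynamic, defined symbols of the freshly built log library.
  $ nm -D --defined-only build-tmp/lib/librte_log.so.24.0 | head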
00:01:57.881 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:57.881 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.881 [157/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.881 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:57.881 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:57.881 [160/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.881 [161/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.148 [162/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.148 [163/710] Linking static target lib/librte_timer.a 00:01:58.148 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.148 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.148 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:58.148 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.148 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.148 [169/710] Linking static target lib/librte_bbdev.a 00:01:58.410 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.410 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.410 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:58.410 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.410 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.410 [175/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.410 [176/710] Linking static target lib/librte_compressdev.a 00:01:58.410 [177/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.672 [178/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.673 [179/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:58.673 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:58.932 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:58.932 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:58.932 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:58.932 [184/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.932 [185/710] Linking static target lib/librte_distributor.a 00:01:58.932 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:59.198 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.198 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.198 [189/710] Linking static target lib/librte_bpf.a 00:01:59.198 [190/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.198 [191/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.198 [192/710] Linking static target lib/librte_dmadev.a 00:01:59.460 [193/710] Compiling C object 
lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:59.460 [194/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:59.460 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:59.460 [196/710] Linking static target lib/librte_dispatcher.a 00:01:59.460 [197/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:59.460 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.460 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:59.460 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:59.460 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:59.724 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:59.724 [203/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:59.724 [204/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:59.724 [205/710] Linking static target lib/librte_gpudev.a 00:01:59.724 [206/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:59.724 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.724 [208/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:59.724 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.724 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:59.724 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:59.724 [212/710] Linking static target lib/librte_gro.a 00:01:59.724 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.724 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.724 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:59.724 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:59.724 [217/710] Linking static target lib/librte_jobstats.a 00:01:59.989 [218/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:59.989 [219/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.989 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:00.255 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.255 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.255 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:00.255 [224/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.255 [225/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:00.255 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:00.514 [227/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:00.514 [228/710] Linking static target lib/librte_latencystats.a 00:02:00.514 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:00.514 [230/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:00.514 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:00.514 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:00.514 [233/710] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:00.514 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:00.783 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:00.784 [236/710] Linking static target lib/librte_ip_frag.a 00:02:00.784 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:00.784 [238/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:00.784 [239/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.784 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.784 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.046 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:01.046 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.046 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:01.046 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.046 [246/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:01.046 [247/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.306 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:01.306 [249/710] Linking static target lib/librte_gso.a 00:02:01.306 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.306 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:01.306 [252/710] Linking static target lib/librte_regexdev.a 00:02:01.306 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:01.306 [254/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:01.306 [255/710] Linking static target lib/librte_rawdev.a 00:02:01.306 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:01.571 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:01.571 [258/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.571 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:01.571 [260/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.571 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:01.571 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:01.571 [263/710] Linking static target lib/librte_mldev.a 00:02:01.571 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:01.571 [265/710] Linking static target lib/librte_efd.a 00:02:01.571 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:01.571 [267/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:01.835 [268/710] Linking static target lib/librte_pcapng.a 00:02:01.835 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:01.835 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:01.835 [271/710] Linking static target lib/librte_stack.a 00:02:01.835 [272/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.835 [273/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:01.835 [274/710] Compiling C object 
lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:01.835 [275/710] Linking static target lib/acl/libavx2_tmp.a 00:02:02.094 [276/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:02.094 [277/710] Linking static target lib/librte_lpm.a 00:02:02.094 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.094 [279/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [280/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [281/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:02.094 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.094 [283/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.094 [284/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.094 [286/710] Linking static target lib/librte_hash.a 00:02:02.375 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.375 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.375 [289/710] Linking static target lib/librte_reorder.a 00:02:02.375 [290/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.375 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.375 [292/710] Linking static target lib/librte_power.a 00:02:02.375 [293/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:02.375 [294/710] Linking static target lib/acl/libavx512_tmp.a 00:02:02.375 [295/710] Linking static target lib/librte_acl.a 00:02:02.375 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.375 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.691 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.691 [299/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.691 [300/710] Linking static target lib/librte_security.a 00:02:02.691 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:02.691 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.691 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.691 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.954 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.954 [306/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:02.954 [307/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:02.954 [308/710] Linking static target lib/librte_rib.a 00:02:02.954 [309/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.954 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:02.954 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:02.954 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:02.954 [313/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:02.954 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:02.954 [315/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.220 
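Note: targets such as lib/net/libnet_crc_avx512_lib.a, lib/member/libsketch_avx512_tmp.a and lib/acl/libavx2_tmp.a above follow DPDK's usual pattern for CPU-specific fast paths: the AVX2/AVX-512 sources are compiled into small helper archives with the extra instruction-set flags and linked into the parent library, and the matching implementation is selected at run time from the CPU's feature flags rather than being compiled into the baseline code. A quick sketch for checking which AVX-family extensions the build host advertises:

  # Show the unique AVX-related CPU flags on this machine.
  $ grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u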
[316/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.220 [317/710] Linking static target lib/librte_mbuf.a 00:02:03.220 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.220 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:03.220 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:03.220 [321/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:03.220 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:03.220 [323/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.478 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:03.478 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:03.478 [326/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:03.478 [327/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:03.478 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.741 [329/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.741 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.741 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:03.999 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:03.999 [333/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.999 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:03.999 [335/710] Linking static target lib/librte_eventdev.a 00:02:03.999 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:03.999 [337/710] Linking static target lib/librte_member.a 00:02:04.261 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:04.261 [339/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.261 [340/710] Linking static target lib/librte_cryptodev.a 00:02:04.261 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.261 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:04.519 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:04.519 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:04.519 [345/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:04.519 [346/710] Linking static target lib/librte_sched.a 00:02:04.519 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:04.519 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:04.519 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:04.519 [350/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.519 [351/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:04.519 [352/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:04.519 [353/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.519 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:04.519 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:04.519 [356/710] Linking 
static target lib/librte_ethdev.a 00:02:04.778 [357/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:04.778 [358/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:04.778 [359/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:04.778 [360/710] Linking static target lib/librte_fib.a 00:02:04.778 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:04.778 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:05.044 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:05.044 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:05.044 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:05.044 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.044 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:05.044 [368/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.304 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:05.304 [370/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:05.304 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:05.304 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:05.304 [373/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.563 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:05.563 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:05.563 [376/710] Linking static target lib/librte_pdump.a 00:02:05.563 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:05.824 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:05.824 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:05.824 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:05.824 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.824 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:05.824 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:05.824 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:05.824 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:05.824 [386/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:05.824 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.824 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:06.094 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:06.094 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.094 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:06.094 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:06.354 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:06.355 [394/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:06.355 [395/710] Linking static target lib/librte_ipsec.a 00:02:06.355 [396/710] Linking static target lib/librte_table.a 00:02:06.355 [397/710] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.355 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:06.613 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:06.613 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:06.613 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:06.878 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.878 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:06.878 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:06.878 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:07.145 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:07.145 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:07.145 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.145 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.145 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.145 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.145 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.145 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:07.408 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.408 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.408 [416/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:07.408 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:07.408 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.668 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.668 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.668 [421/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.668 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.668 [423/710] Linking static target drivers/librte_bus_vdev.a 00:02:07.668 [424/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:07.668 [425/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.668 [426/710] Linking target lib/librte_eal.so.24.0 00:02:07.930 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:07.930 [428/710] Linking static target lib/librte_port.a 00:02:07.930 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:07.930 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.930 [431/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:07.930 [432/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.930 [433/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.930 [434/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:08.193 [435/710] Linking target 
lib/librte_ring.so.24.0 00:02:08.193 [436/710] Linking target lib/librte_meter.so.24.0 00:02:08.193 [437/710] Linking target lib/librte_pci.so.24.0 00:02:08.193 [438/710] Linking target lib/librte_timer.so.24.0 00:02:08.193 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:08.193 [440/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:08.193 [441/710] Linking target lib/librte_acl.so.24.0 00:02:08.193 [442/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.193 [443/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.193 [444/710] Linking target lib/librte_cfgfile.so.24.0 00:02:08.460 [445/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:08.460 [446/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:08.460 [447/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.460 [448/710] Linking target lib/librte_rcu.so.24.0 00:02:08.460 [449/710] Linking target lib/librte_dmadev.so.24.0 00:02:08.460 [450/710] Linking target lib/librte_mempool.so.24.0 00:02:08.460 [451/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:08.460 [452/710] Linking static target lib/librte_graph.a 00:02:08.460 [453/710] Linking target lib/librte_jobstats.so.24.0 00:02:08.461 [454/710] Linking target lib/librte_rawdev.so.24.0 00:02:08.461 [455/710] Linking target lib/librte_stack.so.24.0 00:02:08.461 [456/710] Linking static target drivers/librte_bus_pci.a 00:02:08.461 [457/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.461 [458/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.461 [459/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.461 [460/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:08.461 [461/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:08.461 [462/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:08.733 [463/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:08.733 [464/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:08.733 [465/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:08.733 [466/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:08.733 [467/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:08.733 [468/710] Linking target lib/librte_rib.so.24.0 00:02:08.733 [469/710] Linking target lib/librte_mbuf.so.24.0 00:02:08.733 [470/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.733 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:08.733 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:08.733 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.999 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.999 [475/710] Linking static target drivers/librte_mempool_ring.a 00:02:08.999 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:08.999 [477/710] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.999 [478/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:08.999 [479/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:08.999 [480/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:08.999 [481/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:08.999 [482/710] Linking target lib/librte_fib.so.24.0 00:02:08.999 [483/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:08.999 [484/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:08.999 [485/710] Linking target lib/librte_net.so.24.0 00:02:08.999 [486/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:08.999 [487/710] Linking target lib/librte_bbdev.so.24.0 00:02:08.999 [488/710] Linking target lib/librte_compressdev.so.24.0 00:02:08.999 [489/710] Linking target lib/librte_distributor.so.24.0 00:02:08.999 [490/710] Linking target lib/librte_cryptodev.so.24.0 00:02:08.999 [491/710] Linking target lib/librte_gpudev.so.24.0 00:02:09.265 [492/710] Linking target lib/librte_regexdev.so.24.0 00:02:09.265 [493/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:09.265 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:09.265 [495/710] Linking target lib/librte_mldev.so.24.0 00:02:09.265 [496/710] Linking target lib/librte_reorder.so.24.0 00:02:09.265 [497/710] Linking target lib/librte_sched.so.24.0 00:02:09.265 [498/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:09.265 [499/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.265 [500/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:09.265 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:09.265 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:09.265 [503/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.265 [504/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.265 [505/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.265 [506/710] Linking target lib/librte_cmdline.so.24.0 00:02:09.532 [507/710] Linking target lib/librte_hash.so.24.0 00:02:09.532 [508/710] Linking target lib/librte_security.so.24.0 00:02:09.532 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:09.532 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:09.532 [511/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:09.532 [512/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.532 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:09.792 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:09.792 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:09.792 [516/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:09.792 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:09.792 [518/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:09.792 [519/710] Linking target 
lib/librte_efd.so.24.0 00:02:09.792 [520/710] Linking target lib/librte_lpm.so.24.0 00:02:09.792 [521/710] Linking target lib/librte_ipsec.so.24.0 00:02:09.792 [522/710] Linking target lib/librte_member.so.24.0 00:02:09.792 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:10.052 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:10.052 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:10.053 [526/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:10.053 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:10.312 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:10.312 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:10.312 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:10.312 [531/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:10.583 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:10.583 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:10.583 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:10.583 [535/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:10.583 [536/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:10.583 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:10.846 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:10.846 [539/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:10.846 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:10.846 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:11.107 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:11.107 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:11.107 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:11.372 [545/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:11.372 [546/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:11.372 [547/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:11.372 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:11.372 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:11.372 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:11.372 [551/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:11.636 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:11.636 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:11.636 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:11.636 [555/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:11.636 [556/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:11.897 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:11.897 [558/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:11.897 [559/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:12.163 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:12.423 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:12.686 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:12.686 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:12.686 [564/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:12.686 [565/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.686 [566/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:12.686 [567/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:12.686 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:12.953 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:12.954 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:12.954 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:12.954 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:12.954 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:12.954 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:13.218 [575/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:13.218 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:13.218 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:13.218 [578/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:13.218 [579/710] Linking target lib/librte_metrics.so.24.0 00:02:13.218 [580/710] Linking target lib/librte_bpf.so.24.0 00:02:13.218 [581/710] Linking target lib/librte_gro.so.24.0 00:02:13.218 [582/710] Linking target lib/librte_eventdev.so.24.0 00:02:13.218 [583/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:13.218 [584/710] Linking target lib/librte_gso.so.24.0 00:02:13.218 [585/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:13.486 [586/710] Linking target lib/librte_ip_frag.so.24.0 00:02:13.486 [587/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:13.486 [588/710] Linking target lib/librte_pcapng.so.24.0 00:02:13.486 [589/710] Linking target lib/librte_power.so.24.0 00:02:13.486 [590/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:13.486 [591/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:13.486 [592/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:13.486 [593/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:13.486 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:02:13.486 [595/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:13.486 [596/710] Linking target lib/librte_latencystats.so.24.0 00:02:13.486 [597/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:13.486 [598/710] Compiling C 
object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:13.486 [599/710] Linking static target lib/librte_pdcp.a 00:02:13.486 [600/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:13.486 [601/710] Linking target lib/librte_dispatcher.so.24.0 00:02:13.749 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:13.749 [603/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:13.749 [604/710] Linking target lib/librte_port.so.24.0 00:02:13.749 [605/710] Linking target lib/librte_pdump.so.24.0 00:02:13.749 [606/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:13.749 [607/710] Linking target lib/librte_graph.so.24.0 00:02:13.749 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:13.749 [609/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:14.012 [610/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:14.012 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:14.012 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:14.012 [613/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:14.012 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:14.012 [615/710] Linking target lib/librte_table.so.24.0 00:02:14.012 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:14.276 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:14.276 [618/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.276 [619/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:14.276 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:14.276 [621/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:14.276 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:14.276 [623/710] Linking target lib/librte_pdcp.so.24.0 00:02:14.276 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:14.276 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:14.539 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:14.539 [627/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:14.539 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:14.836 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:14.836 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:15.108 [631/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:15.108 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:15.108 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:15.108 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:15.108 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:15.108 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:15.367 [637/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:15.367 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:15.367 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:15.367 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:15.367 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:15.627 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:15.627 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:15.627 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:15.627 [645/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:15.885 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:15.885 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:15.885 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:15.885 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:15.885 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:16.142 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:16.400 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:16.400 [653/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:16.400 [654/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:16.400 [655/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:16.400 [656/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:16.400 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:16.400 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:16.966 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:16.966 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.966 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.966 [662/710] Linking static target drivers/librte_net_i40e.a 00:02:16.966 [663/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:17.224 [664/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:17.224 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:17.224 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:17.224 [667/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:17.482 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.482 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:17.482 [670/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:18.049 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:18.307 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:18.307 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:18.307 [674/710] Linking static target lib/librte_node.a 00:02:18.565 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.565 
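Note: the "Generating drivers/rte_net_i40e.pmd.c with a custom command" step above produces DPDK's per-driver information stub, a generated C file that embeds the PMD's name and supported device IDs into the binary. It can be read back out of the built artifact with the helper script shipped in the source tree (a sketch; the script lives in usertools/ and its exact options vary between DPDK releases):

  # Dump the PMD info embedded in the freshly built i40e driver.
  $ python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_net_i40e.so.24.0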
[676/710] Linking target lib/librte_node.so.24.0 00:02:19.499 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:19.758 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:20.016 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:21.390 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:21.979 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:27.242 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.327 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.327 [684/710] Linking static target lib/librte_vhost.a 00:02:59.327 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.327 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:21.252 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:21.252 [688/710] Linking static target lib/librte_pipeline.a 00:03:21.252 [689/710] Linking target app/dpdk-dumpcap 00:03:21.252 [690/710] Linking target app/dpdk-test-acl 00:03:21.252 [691/710] Linking target app/dpdk-proc-info 00:03:21.252 [692/710] Linking target app/dpdk-test-cmdline 00:03:21.252 [693/710] Linking target app/dpdk-test-dma-perf 00:03:21.252 [694/710] Linking target app/dpdk-pdump 00:03:21.252 [695/710] Linking target app/dpdk-test-regex 00:03:21.252 [696/710] Linking target app/dpdk-test-fib 00:03:21.252 [697/710] Linking target app/dpdk-test-sad 00:03:21.252 [698/710] Linking target app/dpdk-test-gpudev 00:03:21.252 [699/710] Linking target app/dpdk-test-crypto-perf 00:03:21.252 [700/710] Linking target app/dpdk-test-mldev 00:03:21.252 [701/710] Linking target app/dpdk-graph 00:03:21.252 [702/710] Linking target app/dpdk-test-flow-perf 00:03:21.252 [703/710] Linking target app/dpdk-test-security-perf 00:03:21.252 [704/710] Linking target app/dpdk-test-bbdev 00:03:21.252 [705/710] Linking target app/dpdk-test-pipeline 00:03:21.252 [706/710] Linking target app/dpdk-test-eventdev 00:03:21.252 [707/710] Linking target app/dpdk-test-compress-perf 00:03:21.252 [708/710] Linking target app/dpdk-testpmd 00:03:21.818 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.818 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:21.818 06:32:09 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:21.818 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:22.076 [0/1] Installing files. 
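Note: the second autobuild step above reruns ninja with the install target, which copies artifacts under the prefix recorded at configure time (.../dpdk/build, per the options summary). Meson-generated install rules also honor DESTDIR for staged installs into a packaging root; a sketch, not taken from this log:

  # Stage the complete install tree under /tmp/dpdk-staging instead of the real prefix.
  $ DESTDIR=/tmp/dpdk-staging ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp install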
00:03:22.339 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.341 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.343 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.344 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.344 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:22.344 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:22.344 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.344 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.345 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.915 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.915 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.916 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.916 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.916 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:22.916 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.916 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
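The run of entries above stages DPDK's public API headers — rte_ethdev.h and rte_flow.h for the ethdev layer, the cmdline parser, and the metrics library — into the build/include prefix. As a minimal sketch of what an application compiled against that staged tree might look like (the program itself is illustrative, assuming only the DPDK 23.11 API surface shown in this log):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    /* Illustrative only: initialize the EAL, then count the Ethernet
     * ports probed by the PMDs installed under lib/dpdk/pmds-24.0
     * earlier in this log. */
    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        printf("%u port(s) available\n",
               (unsigned int)rte_eth_dev_count_avail());

        rte_eal_cleanup();
        return 0;
    }
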
00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:22.920 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:22.920 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:22.920 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:22.920 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:22.920 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:22.920 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:22.920 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:22.920 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:22.920 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:22.920 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:22.920 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:22.920 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:22.920 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:22.920 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:22.920 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:22.920 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:22.920 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:22.920 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:22.920 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:22.920 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:22.920 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:22.920 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:22.920 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:22.920 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:22.920 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:22.920 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:22.920 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:22.920 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:22.920 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:22.920 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:22.920 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:22.920 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:22.920 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:22.920 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:22.920 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:22.920 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:22.920 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:22.920 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:22.920 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:22.920 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:22.920 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:22.920 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:22.920 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:22.920 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:22.920 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:22.920 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:22.920 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:22.920 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:22.920 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:22.920 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:22.920 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:22.920 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:22.920 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:22.920 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:22.920 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:22.920 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:22.920 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:22.920 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:22.920 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:22.920 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:22.920 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:22.920 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:22.920 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:22.920 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:22.920 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:22.920 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:22.920 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:22.920 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:22.920 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:22.920 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:22.920 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:22.920 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:22.920 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:22.920 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:22.921 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:22.921 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:22.921 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:22.921 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:22.921 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:22.921 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:22.921 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:22.921 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:22.921 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:22.921 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:22.921 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:22.921 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:22.921 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:22.921 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:22.921 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:22.921 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:22.921 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:22.921 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:22.921 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:22.921 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:22.921 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:22.921 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:22.921 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:22.921 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:22.921 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:22.921 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:22.921 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:22.921 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:22.921 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:22.921 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:22.921 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:22.921 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:22.921 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:22.921 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:22.921 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:22.921 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:22.921 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:22.921 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:22.921 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:22.921 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:22.921 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:22.921 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:22.921 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:22.921 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:22.921 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:22.921 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:22.921 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:22.921 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:22.921 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:22.921 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:22.921 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:22.921 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:22.921 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:22.921 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:22.921 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:22.921 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:22.921 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:22.921 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:22.921 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:23.180 06:32:10 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:23.180 06:32:10 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:23.180 06:32:10 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:23.180 06:32:10 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.180 00:03:23.180 real 1m34.690s 00:03:23.180 user 18m4.734s 00:03:23.180 sys 2m6.252s 00:03:23.180 06:32:10 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:23.180 06:32:10 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:23.180 ************************************ 00:03:23.180 END TEST build_native_dpdk 00:03:23.180 ************************************ 00:03:23.180 06:32:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:23.180 06:32:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:23.180 06:32:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:23.180 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:23.180 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.180 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:23.180 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:23.438 Using 'verbs' RDMA provider 00:03:34.358 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:42.485 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:42.485 Creating mk/config.mk...done. 00:03:42.485 Creating mk/cc.flags.mk...done. 00:03:42.485 Type 'make' to build. 00:03:42.485 06:32:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:42.485 06:32:30 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:42.485 06:32:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:42.485 06:32:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.745 ************************************ 00:03:42.745 START TEST make 00:03:42.745 ************************************ 00:03:42.745 06:32:30 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:43.013 make[1]: Nothing to be done for 'all'. 
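A note on the long run of "Installing symlink pointing to ..." lines above: this is standard ELF shared-library versioning. For each library the install leaves the fully versioned object (e.g. librte_eal.so.24.0), a SONAME link (librte_eal.so.24) for the runtime loader, and an unversioned linker name (librte_eal.so) for build-time -lrte_eal resolution; the bus and net PMDs get the same chain inside the dpdk/pmds-24.0 plugin directory, which is what the custom symlink-drivers-solibs.sh install script sets up. A minimal sketch of the equivalent layout, assuming $DPDK_BUILD stands in for the long workspace path (the real work is done internally by meson and the script, so this is illustrative only):

  cd "$DPDK_BUILD/lib"
  ln -sf librte_eal.so.24.0 librte_eal.so.24                           # SONAME link, used by the dynamic loader
  ln -sf librte_eal.so.24   librte_eal.so                              # linker name, used when building with -lrte_eal
  ln -sf librte_bus_pci.so.24.0 dpdk/pmds-24.0/librte_bus_pci.so.24    # same chain for PMD plugins

Consumers then pick all of this up through the installed pkg-config files, which is what the "Using .../dpdk/build/lib/pkgconfig for additional libs" line from SPDK's configure above refers to; roughly:

  PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig" pkg-config --cflags --libs libdpdk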
00:03:44.397 The Meson build system 00:03:44.397 Version: 1.3.1 00:03:44.397 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:44.397 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:44.397 Build type: native build 00:03:44.397 Project name: libvfio-user 00:03:44.397 Project version: 0.0.1 00:03:44.397 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:44.397 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:44.397 Host machine cpu family: x86_64 00:03:44.397 Host machine cpu: x86_64 00:03:44.397 Run-time dependency threads found: YES 00:03:44.397 Library dl found: YES 00:03:44.397 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:44.397 Run-time dependency json-c found: YES 0.17 00:03:44.397 Run-time dependency cmocka found: YES 1.1.7 00:03:44.397 Program pytest-3 found: NO 00:03:44.397 Program flake8 found: NO 00:03:44.397 Program misspell-fixer found: NO 00:03:44.397 Program restructuredtext-lint found: NO 00:03:44.397 Program valgrind found: YES (/usr/bin/valgrind) 00:03:44.397 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:44.397 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:44.397 Compiler for C supports arguments -Wwrite-strings: YES 00:03:44.397 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:44.397 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:44.397 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:44.397 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:44.397 Build targets in project: 8 00:03:44.397 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:44.397 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:44.397 00:03:44.397 libvfio-user 0.0.1 00:03:44.397 00:03:44.397 User defined options 00:03:44.397 buildtype : debug 00:03:44.397 default_library: shared 00:03:44.397 libdir : /usr/local/lib 00:03:44.397 00:03:44.397 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:45.350 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:45.350 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:45.350 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:45.350 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:45.350 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:45.350 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:45.350 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:45.350 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:45.350 [8/37] Compiling C object samples/null.p/null.c.o 00:03:45.350 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:45.350 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:45.350 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:45.350 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:45.350 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:45.350 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:45.350 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:45.612 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:45.612 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:45.612 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:45.612 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:45.612 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:45.612 [21/37] Compiling C object samples/server.p/server.c.o 00:03:45.612 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:45.612 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:45.612 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:45.612 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:45.612 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:45.612 [27/37] Compiling C object samples/client.p/client.c.o 00:03:45.612 [28/37] Linking target samples/client 00:03:45.612 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:45.874 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:45.874 [31/37] Linking target test/unit_tests 00:03:45.874 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:45.874 [33/37] Linking target samples/null 00:03:45.874 [34/37] Linking target samples/server 00:03:45.874 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:45.874 [36/37] Linking target samples/lspci 00:03:45.874 [37/37] Linking target samples/gpio-pci-idio-16 00:03:45.874 INFO: autodetecting backend as ninja 00:03:45.874 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
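The libvfio-user configure and build above can be approximated by hand with meson and ninja. The flags below are inferred from the "User defined options" summary (buildtype, default_library, libdir) and the build directory shown in the log, so treat them as a sketch of what SPDK's submodule build invokes rather than the exact command line:

  meson setup build-debug \
      --buildtype=debug \
      --default-library=shared \
      --libdir=/usr/local/lib
  ninja -C build-debug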
00:03:46.137 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:46.708 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:46.708 ninja: no work to do. 00:03:58.908 CC lib/ut/ut.o 00:03:58.908 CC lib/ut_mock/mock.o 00:03:58.908 CC lib/log/log.o 00:03:58.908 CC lib/log/log_flags.o 00:03:58.908 CC lib/log/log_deprecated.o 00:03:58.908 LIB libspdk_log.a 00:03:58.908 LIB libspdk_ut.a 00:03:58.908 LIB libspdk_ut_mock.a 00:03:58.908 SO libspdk_ut.so.2.0 00:03:58.908 SO libspdk_ut_mock.so.6.0 00:03:58.908 SO libspdk_log.so.7.0 00:03:58.908 SYMLINK libspdk_ut.so 00:03:58.908 SYMLINK libspdk_ut_mock.so 00:03:58.908 SYMLINK libspdk_log.so 00:03:58.908 CXX lib/trace_parser/trace.o 00:03:58.908 CC lib/dma/dma.o 00:03:58.908 CC lib/ioat/ioat.o 00:03:58.908 CC lib/util/base64.o 00:03:58.908 CC lib/util/bit_array.o 00:03:58.908 CC lib/util/cpuset.o 00:03:58.908 CC lib/util/crc16.o 00:03:58.908 CC lib/util/crc32.o 00:03:58.908 CC lib/util/crc32c.o 00:03:58.908 CC lib/util/crc32_ieee.o 00:03:58.908 CC lib/util/crc64.o 00:03:58.908 CC lib/util/dif.o 00:03:58.908 CC lib/util/fd.o 00:03:58.908 CC lib/util/file.o 00:03:58.908 CC lib/util/hexlify.o 00:03:58.908 CC lib/util/iov.o 00:03:58.908 CC lib/util/math.o 00:03:58.908 CC lib/util/pipe.o 00:03:58.908 CC lib/util/strerror_tls.o 00:03:58.908 CC lib/util/string.o 00:03:58.908 CC lib/util/uuid.o 00:03:58.908 CC lib/util/fd_group.o 00:03:58.908 CC lib/util/xor.o 00:03:58.908 CC lib/util/zipf.o 00:03:58.908 CC lib/vfio_user/host/vfio_user_pci.o 00:03:58.908 CC lib/vfio_user/host/vfio_user.o 00:03:58.908 LIB libspdk_dma.a 00:03:58.908 SO libspdk_dma.so.4.0 00:03:58.908 SYMLINK libspdk_dma.so 00:03:58.908 LIB libspdk_ioat.a 00:03:58.908 SO libspdk_ioat.so.7.0 00:03:58.908 LIB libspdk_vfio_user.a 00:03:58.908 SYMLINK libspdk_ioat.so 00:03:58.908 SO libspdk_vfio_user.so.5.0 00:03:58.908 SYMLINK libspdk_vfio_user.so 00:03:58.908 LIB libspdk_util.a 00:03:58.908 SO libspdk_util.so.9.0 00:03:59.167 SYMLINK libspdk_util.so 00:03:59.167 CC lib/idxd/idxd.o 00:03:59.167 CC lib/json/json_parse.o 00:03:59.167 CC lib/vmd/vmd.o 00:03:59.167 CC lib/rdma/common.o 00:03:59.167 CC lib/idxd/idxd_user.o 00:03:59.167 CC lib/env_dpdk/env.o 00:03:59.167 CC lib/json/json_util.o 00:03:59.167 CC lib/conf/conf.o 00:03:59.167 CC lib/rdma/rdma_verbs.o 00:03:59.167 CC lib/idxd/idxd_kernel.o 00:03:59.167 CC lib/env_dpdk/memory.o 00:03:59.167 CC lib/vmd/led.o 00:03:59.167 CC lib/env_dpdk/pci.o 00:03:59.167 CC lib/json/json_write.o 00:03:59.167 CC lib/env_dpdk/init.o 00:03:59.167 CC lib/env_dpdk/threads.o 00:03:59.167 CC lib/env_dpdk/pci_ioat.o 00:03:59.167 CC lib/env_dpdk/pci_virtio.o 00:03:59.167 CC lib/env_dpdk/pci_vmd.o 00:03:59.167 CC lib/env_dpdk/pci_idxd.o 00:03:59.167 CC lib/env_dpdk/pci_event.o 00:03:59.167 CC lib/env_dpdk/sigbus_handler.o 00:03:59.167 CC lib/env_dpdk/pci_dpdk.o 00:03:59.167 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:59.167 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:59.167 LIB libspdk_trace_parser.a 00:03:59.425 SO libspdk_trace_parser.so.5.0 00:03:59.425 SYMLINK libspdk_trace_parser.so 00:03:59.683 LIB libspdk_json.a 00:03:59.683 LIB libspdk_rdma.a 00:03:59.683 LIB libspdk_conf.a 00:03:59.683 SO libspdk_rdma.so.6.0 00:03:59.683 SO libspdk_json.so.6.0 00:03:59.683 SO libspdk_conf.so.6.0 00:03:59.683 SYMLINK libspdk_rdma.so 00:03:59.683 SYMLINK libspdk_json.so 00:03:59.683 SYMLINK 
libspdk_conf.so 00:03:59.683 LIB libspdk_idxd.a 00:03:59.683 CC lib/jsonrpc/jsonrpc_server.o 00:03:59.683 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:59.683 CC lib/jsonrpc/jsonrpc_client.o 00:03:59.683 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:59.941 SO libspdk_idxd.so.12.0 00:03:59.941 SYMLINK libspdk_idxd.so 00:03:59.941 LIB libspdk_vmd.a 00:03:59.941 SO libspdk_vmd.so.6.0 00:03:59.941 SYMLINK libspdk_vmd.so 00:04:00.198 LIB libspdk_jsonrpc.a 00:04:00.199 SO libspdk_jsonrpc.so.6.0 00:04:00.199 SYMLINK libspdk_jsonrpc.so 00:04:00.456 CC lib/rpc/rpc.o 00:04:00.456 LIB libspdk_rpc.a 00:04:00.714 SO libspdk_rpc.so.6.0 00:04:00.714 SYMLINK libspdk_rpc.so 00:04:00.714 CC lib/trace/trace.o 00:04:00.714 CC lib/keyring/keyring.o 00:04:00.714 CC lib/trace/trace_flags.o 00:04:00.714 CC lib/notify/notify.o 00:04:00.714 CC lib/keyring/keyring_rpc.o 00:04:00.714 CC lib/trace/trace_rpc.o 00:04:00.714 CC lib/notify/notify_rpc.o 00:04:00.972 LIB libspdk_notify.a 00:04:00.972 SO libspdk_notify.so.6.0 00:04:00.972 LIB libspdk_keyring.a 00:04:00.972 SYMLINK libspdk_notify.so 00:04:00.972 LIB libspdk_trace.a 00:04:00.972 SO libspdk_keyring.so.1.0 00:04:01.229 SO libspdk_trace.so.10.0 00:04:01.229 SYMLINK libspdk_keyring.so 00:04:01.229 SYMLINK libspdk_trace.so 00:04:01.229 CC lib/thread/thread.o 00:04:01.229 CC lib/thread/iobuf.o 00:04:01.229 CC lib/sock/sock.o 00:04:01.229 CC lib/sock/sock_rpc.o 00:04:01.229 LIB libspdk_env_dpdk.a 00:04:01.486 SO libspdk_env_dpdk.so.14.0 00:04:01.486 SYMLINK libspdk_env_dpdk.so 00:04:01.745 LIB libspdk_sock.a 00:04:01.745 SO libspdk_sock.so.9.0 00:04:01.745 SYMLINK libspdk_sock.so 00:04:02.003 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:02.003 CC lib/nvme/nvme_ctrlr.o 00:04:02.003 CC lib/nvme/nvme_fabric.o 00:04:02.003 CC lib/nvme/nvme_ns_cmd.o 00:04:02.003 CC lib/nvme/nvme_ns.o 00:04:02.003 CC lib/nvme/nvme_pcie_common.o 00:04:02.003 CC lib/nvme/nvme_pcie.o 00:04:02.003 CC lib/nvme/nvme_qpair.o 00:04:02.003 CC lib/nvme/nvme.o 00:04:02.003 CC lib/nvme/nvme_quirks.o 00:04:02.003 CC lib/nvme/nvme_transport.o 00:04:02.003 CC lib/nvme/nvme_discovery.o 00:04:02.003 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:02.003 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:02.003 CC lib/nvme/nvme_tcp.o 00:04:02.003 CC lib/nvme/nvme_opal.o 00:04:02.003 CC lib/nvme/nvme_io_msg.o 00:04:02.003 CC lib/nvme/nvme_poll_group.o 00:04:02.003 CC lib/nvme/nvme_zns.o 00:04:02.003 CC lib/nvme/nvme_stubs.o 00:04:02.003 CC lib/nvme/nvme_auth.o 00:04:02.003 CC lib/nvme/nvme_cuse.o 00:04:02.003 CC lib/nvme/nvme_vfio_user.o 00:04:02.003 CC lib/nvme/nvme_rdma.o 00:04:02.938 LIB libspdk_thread.a 00:04:02.938 SO libspdk_thread.so.10.0 00:04:02.938 SYMLINK libspdk_thread.so 00:04:03.197 CC lib/init/json_config.o 00:04:03.197 CC lib/vfu_tgt/tgt_endpoint.o 00:04:03.197 CC lib/virtio/virtio.o 00:04:03.197 CC lib/blob/blobstore.o 00:04:03.197 CC lib/vfu_tgt/tgt_rpc.o 00:04:03.197 CC lib/virtio/virtio_vhost_user.o 00:04:03.197 CC lib/accel/accel.o 00:04:03.197 CC lib/init/subsystem.o 00:04:03.197 CC lib/accel/accel_rpc.o 00:04:03.197 CC lib/blob/request.o 00:04:03.197 CC lib/virtio/virtio_vfio_user.o 00:04:03.197 CC lib/accel/accel_sw.o 00:04:03.197 CC lib/init/subsystem_rpc.o 00:04:03.197 CC lib/blob/zeroes.o 00:04:03.197 CC lib/virtio/virtio_pci.o 00:04:03.197 CC lib/init/rpc.o 00:04:03.197 CC lib/blob/blob_bs_dev.o 00:04:03.509 LIB libspdk_init.a 00:04:03.509 SO libspdk_init.so.5.0 00:04:03.509 LIB libspdk_virtio.a 00:04:03.509 LIB libspdk_vfu_tgt.a 00:04:03.509 SYMLINK libspdk_init.so 00:04:03.509 SO libspdk_virtio.so.7.0 00:04:03.509 
SO libspdk_vfu_tgt.so.3.0 00:04:03.509 SYMLINK libspdk_virtio.so 00:04:03.509 SYMLINK libspdk_vfu_tgt.so 00:04:03.768 CC lib/event/app.o 00:04:03.768 CC lib/event/reactor.o 00:04:03.768 CC lib/event/log_rpc.o 00:04:03.768 CC lib/event/app_rpc.o 00:04:03.768 CC lib/event/scheduler_static.o 00:04:04.025 LIB libspdk_event.a 00:04:04.026 SO libspdk_event.so.13.0 00:04:04.283 SYMLINK libspdk_event.so 00:04:04.283 LIB libspdk_accel.a 00:04:04.283 SO libspdk_accel.so.15.0 00:04:04.283 SYMLINK libspdk_accel.so 00:04:04.283 LIB libspdk_nvme.a 00:04:04.541 SO libspdk_nvme.so.13.0 00:04:04.541 CC lib/bdev/bdev.o 00:04:04.541 CC lib/bdev/bdev_rpc.o 00:04:04.541 CC lib/bdev/bdev_zone.o 00:04:04.541 CC lib/bdev/part.o 00:04:04.541 CC lib/bdev/scsi_nvme.o 00:04:04.799 SYMLINK libspdk_nvme.so 00:04:06.174 LIB libspdk_blob.a 00:04:06.174 SO libspdk_blob.so.11.0 00:04:06.432 SYMLINK libspdk_blob.so 00:04:06.432 CC lib/lvol/lvol.o 00:04:06.432 CC lib/blobfs/blobfs.o 00:04:06.432 CC lib/blobfs/tree.o 00:04:06.998 LIB libspdk_bdev.a 00:04:06.998 SO libspdk_bdev.so.15.0 00:04:06.998 SYMLINK libspdk_bdev.so 00:04:07.266 LIB libspdk_blobfs.a 00:04:07.266 CC lib/scsi/dev.o 00:04:07.266 CC lib/nbd/nbd.o 00:04:07.266 CC lib/scsi/lun.o 00:04:07.266 CC lib/ublk/ublk_rpc.o 00:04:07.266 CC lib/ftl/ftl_core.o 00:04:07.266 CC lib/ublk/ublk.o 00:04:07.266 CC lib/nvmf/ctrlr.o 00:04:07.266 CC lib/nvmf/ctrlr_discovery.o 00:04:07.266 CC lib/scsi/port.o 00:04:07.266 SO libspdk_blobfs.so.10.0 00:04:07.266 CC lib/nbd/nbd_rpc.o 00:04:07.266 CC lib/ftl/ftl_init.o 00:04:07.266 CC lib/nvmf/ctrlr_bdev.o 00:04:07.266 CC lib/scsi/scsi.o 00:04:07.266 CC lib/nvmf/subsystem.o 00:04:07.266 CC lib/ftl/ftl_layout.o 00:04:07.266 CC lib/scsi/scsi_bdev.o 00:04:07.266 CC lib/ftl/ftl_debug.o 00:04:07.266 CC lib/nvmf/nvmf.o 00:04:07.266 CC lib/scsi/scsi_pr.o 00:04:07.266 CC lib/ftl/ftl_io.o 00:04:07.266 CC lib/nvmf/nvmf_rpc.o 00:04:07.266 CC lib/scsi/scsi_rpc.o 00:04:07.266 CC lib/scsi/task.o 00:04:07.266 CC lib/nvmf/tcp.o 00:04:07.266 CC lib/ftl/ftl_sb.o 00:04:07.266 CC lib/nvmf/transport.o 00:04:07.266 CC lib/ftl/ftl_l2p.o 00:04:07.266 CC lib/nvmf/stubs.o 00:04:07.266 CC lib/ftl/ftl_l2p_flat.o 00:04:07.266 CC lib/nvmf/mdns_server.o 00:04:07.266 CC lib/ftl/ftl_nv_cache.o 00:04:07.266 CC lib/nvmf/vfio_user.o 00:04:07.266 CC lib/ftl/ftl_band.o 00:04:07.266 CC lib/nvmf/rdma.o 00:04:07.266 CC lib/ftl/ftl_band_ops.o 00:04:07.266 CC lib/nvmf/auth.o 00:04:07.266 CC lib/ftl/ftl_writer.o 00:04:07.266 CC lib/ftl/ftl_rq.o 00:04:07.266 CC lib/ftl/ftl_reloc.o 00:04:07.266 CC lib/ftl/ftl_l2p_cache.o 00:04:07.266 CC lib/ftl/ftl_p2l.o 00:04:07.266 CC lib/ftl/mngt/ftl_mngt.o 00:04:07.266 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:07.266 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:07.266 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:07.266 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:07.528 SYMLINK libspdk_blobfs.so 00:04:07.528 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:07.528 LIB libspdk_lvol.a 00:04:07.528 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.789 SO libspdk_lvol.so.10.0 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.789 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.789 CC lib/ftl/utils/ftl_conf.o 00:04:07.789 CC lib/ftl/utils/ftl_md.o 00:04:07.789 CC lib/ftl/utils/ftl_mempool.o 00:04:07.789 SYMLINK libspdk_lvol.so 00:04:07.789 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.789 CC 
lib/ftl/utils/ftl_property.o 00:04:07.789 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.789 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.789 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.789 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.789 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.789 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:07.789 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.050 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.050 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.050 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.050 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.050 CC lib/ftl/base/ftl_base_dev.o 00:04:08.050 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.050 CC lib/ftl/ftl_trace.o 00:04:08.050 LIB libspdk_nbd.a 00:04:08.050 SO libspdk_nbd.so.7.0 00:04:08.308 SYMLINK libspdk_nbd.so 00:04:08.308 LIB libspdk_scsi.a 00:04:08.308 SO libspdk_scsi.so.9.0 00:04:08.308 LIB libspdk_ublk.a 00:04:08.308 SO libspdk_ublk.so.3.0 00:04:08.308 SYMLINK libspdk_scsi.so 00:04:08.565 SYMLINK libspdk_ublk.so 00:04:08.565 CC lib/iscsi/conn.o 00:04:08.565 CC lib/vhost/vhost.o 00:04:08.565 CC lib/vhost/vhost_rpc.o 00:04:08.565 CC lib/iscsi/init_grp.o 00:04:08.565 CC lib/vhost/vhost_scsi.o 00:04:08.565 CC lib/iscsi/iscsi.o 00:04:08.565 CC lib/vhost/vhost_blk.o 00:04:08.565 CC lib/iscsi/md5.o 00:04:08.565 CC lib/vhost/rte_vhost_user.o 00:04:08.565 CC lib/iscsi/param.o 00:04:08.565 CC lib/iscsi/portal_grp.o 00:04:08.565 CC lib/iscsi/tgt_node.o 00:04:08.565 CC lib/iscsi/iscsi_subsystem.o 00:04:08.565 CC lib/iscsi/iscsi_rpc.o 00:04:08.565 CC lib/iscsi/task.o 00:04:08.822 LIB libspdk_ftl.a 00:04:08.822 SO libspdk_ftl.so.9.0 00:04:09.386 SYMLINK libspdk_ftl.so 00:04:09.643 LIB libspdk_vhost.a 00:04:09.901 SO libspdk_vhost.so.8.0 00:04:09.901 LIB libspdk_nvmf.a 00:04:09.901 SYMLINK libspdk_vhost.so 00:04:09.901 SO libspdk_nvmf.so.18.0 00:04:09.901 LIB libspdk_iscsi.a 00:04:10.159 SO libspdk_iscsi.so.8.0 00:04:10.159 SYMLINK libspdk_nvmf.so 00:04:10.159 SYMLINK libspdk_iscsi.so 00:04:10.417 CC module/env_dpdk/env_dpdk_rpc.o 00:04:10.417 CC module/vfu_device/vfu_virtio.o 00:04:10.417 CC module/vfu_device/vfu_virtio_blk.o 00:04:10.417 CC module/vfu_device/vfu_virtio_scsi.o 00:04:10.417 CC module/vfu_device/vfu_virtio_rpc.o 00:04:10.675 CC module/blob/bdev/blob_bdev.o 00:04:10.675 CC module/accel/error/accel_error.o 00:04:10.675 CC module/accel/error/accel_error_rpc.o 00:04:10.675 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.675 CC module/accel/ioat/accel_ioat.o 00:04:10.675 CC module/keyring/linux/keyring.o 00:04:10.675 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:10.675 CC module/keyring/linux/keyring_rpc.o 00:04:10.675 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.675 CC module/accel/iaa/accel_iaa.o 00:04:10.675 CC module/accel/iaa/accel_iaa_rpc.o 00:04:10.675 CC module/sock/posix/posix.o 00:04:10.675 CC module/keyring/file/keyring.o 00:04:10.675 CC module/keyring/file/keyring_rpc.o 00:04:10.675 CC module/accel/dsa/accel_dsa.o 00:04:10.675 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.675 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.675 LIB libspdk_env_dpdk_rpc.a 00:04:10.675 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.675 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.675 LIB libspdk_keyring_linux.a 00:04:10.675 LIB libspdk_keyring_file.a 00:04:10.675 LIB libspdk_scheduler_gscheduler.a 00:04:10.675 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.675 SO libspdk_scheduler_gscheduler.so.4.0 00:04:10.675 SO libspdk_keyring_linux.so.1.0 00:04:10.675 SO libspdk_keyring_file.so.1.0 00:04:10.675 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.675 LIB libspdk_accel_error.a 00:04:10.675 LIB libspdk_accel_ioat.a 00:04:10.675 LIB libspdk_scheduler_dynamic.a 00:04:10.933 SO libspdk_accel_error.so.2.0 00:04:10.933 LIB libspdk_accel_iaa.a 00:04:10.933 SO libspdk_accel_ioat.so.6.0 00:04:10.933 SO libspdk_scheduler_dynamic.so.4.0 00:04:10.933 SYMLINK libspdk_scheduler_gscheduler.so 00:04:10.933 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:10.933 SYMLINK libspdk_keyring_linux.so 00:04:10.933 SYMLINK libspdk_keyring_file.so 00:04:10.933 SO libspdk_accel_iaa.so.3.0 00:04:10.933 LIB libspdk_accel_dsa.a 00:04:10.933 SYMLINK libspdk_accel_error.so 00:04:10.933 SYMLINK libspdk_accel_ioat.so 00:04:10.933 LIB libspdk_blob_bdev.a 00:04:10.933 SYMLINK libspdk_scheduler_dynamic.so 00:04:10.934 SO libspdk_accel_dsa.so.5.0 00:04:10.934 SO libspdk_blob_bdev.so.11.0 00:04:10.934 SYMLINK libspdk_accel_iaa.so 00:04:10.934 SYMLINK libspdk_blob_bdev.so 00:04:10.934 SYMLINK libspdk_accel_dsa.so 00:04:11.192 LIB libspdk_vfu_device.a 00:04:11.192 SO libspdk_vfu_device.so.3.0 00:04:11.192 CC module/bdev/lvol/vbdev_lvol.o 00:04:11.192 CC module/bdev/gpt/gpt.o 00:04:11.192 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:11.192 CC module/bdev/null/bdev_null.o 00:04:11.192 CC module/bdev/error/vbdev_error.o 00:04:11.192 CC module/bdev/null/bdev_null_rpc.o 00:04:11.192 CC module/bdev/gpt/vbdev_gpt.o 00:04:11.192 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:11.192 CC module/bdev/error/vbdev_error_rpc.o 00:04:11.192 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:11.192 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.192 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:11.192 CC module/bdev/raid/bdev_raid.o 00:04:11.192 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.192 CC module/bdev/split/vbdev_split.o 00:04:11.192 CC module/bdev/malloc/bdev_malloc.o 00:04:11.192 CC module/bdev/raid/bdev_raid_rpc.o 00:04:11.192 CC module/bdev/nvme/bdev_nvme.o 00:04:11.192 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.192 CC module/bdev/delay/vbdev_delay.o 00:04:11.192 CC module/bdev/iscsi/bdev_iscsi.o 00:04:11.192 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:11.192 CC module/bdev/ftl/bdev_ftl.o 00:04:11.192 CC module/bdev/raid/bdev_raid_sb.o 00:04:11.192 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:11.192 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:11.192 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.192 CC module/blobfs/bdev/blobfs_bdev.o 00:04:11.192 CC module/bdev/nvme/nvme_rpc.o 00:04:11.192 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:11.192 CC module/bdev/raid/raid0.o 00:04:11.192 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:11.192 CC module/bdev/aio/bdev_aio.o 00:04:11.192 CC module/bdev/raid/raid1.o 00:04:11.192 CC module/bdev/nvme/bdev_mdns_client.o 00:04:11.192 CC module/bdev/aio/bdev_aio_rpc.o 00:04:11.192 CC module/bdev/raid/concat.o 00:04:11.192 CC module/bdev/nvme/vbdev_opal.o 00:04:11.192 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:11.192 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.192 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:11.192 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:11.192 SYMLINK libspdk_vfu_device.so 00:04:11.451 LIB libspdk_sock_posix.a 00:04:11.451 SO libspdk_sock_posix.so.6.0 00:04:11.709 LIB libspdk_blobfs_bdev.a 00:04:11.709 SO libspdk_blobfs_bdev.so.6.0 00:04:11.709 SYMLINK libspdk_sock_posix.so 00:04:11.709 LIB libspdk_bdev_split.a 00:04:11.709 SYMLINK libspdk_blobfs_bdev.so 00:04:11.709 LIB libspdk_bdev_null.a 00:04:11.709 SO libspdk_bdev_split.so.6.0 00:04:11.709 LIB 
libspdk_bdev_gpt.a 00:04:11.709 LIB libspdk_bdev_delay.a 00:04:11.709 LIB libspdk_bdev_error.a 00:04:11.709 SO libspdk_bdev_null.so.6.0 00:04:11.709 SO libspdk_bdev_gpt.so.6.0 00:04:11.709 SO libspdk_bdev_delay.so.6.0 00:04:11.709 SO libspdk_bdev_error.so.6.0 00:04:11.709 SYMLINK libspdk_bdev_split.so 00:04:11.709 LIB libspdk_bdev_zone_block.a 00:04:11.709 LIB libspdk_bdev_ftl.a 00:04:11.709 LIB libspdk_bdev_aio.a 00:04:11.709 LIB libspdk_bdev_passthru.a 00:04:11.709 LIB libspdk_bdev_malloc.a 00:04:11.709 SYMLINK libspdk_bdev_null.so 00:04:11.709 SO libspdk_bdev_zone_block.so.6.0 00:04:11.709 LIB libspdk_bdev_iscsi.a 00:04:11.709 SO libspdk_bdev_ftl.so.6.0 00:04:11.709 SYMLINK libspdk_bdev_gpt.so 00:04:11.709 SYMLINK libspdk_bdev_delay.so 00:04:11.709 SO libspdk_bdev_aio.so.6.0 00:04:11.709 SO libspdk_bdev_passthru.so.6.0 00:04:11.709 SYMLINK libspdk_bdev_error.so 00:04:11.709 SO libspdk_bdev_malloc.so.6.0 00:04:11.709 SO libspdk_bdev_iscsi.so.6.0 00:04:11.967 SYMLINK libspdk_bdev_zone_block.so 00:04:11.967 SYMLINK libspdk_bdev_ftl.so 00:04:11.967 SYMLINK libspdk_bdev_passthru.so 00:04:11.967 SYMLINK libspdk_bdev_aio.so 00:04:11.967 SYMLINK libspdk_bdev_malloc.so 00:04:11.967 SYMLINK libspdk_bdev_iscsi.so 00:04:11.967 LIB libspdk_bdev_virtio.a 00:04:11.967 SO libspdk_bdev_virtio.so.6.0 00:04:11.967 LIB libspdk_bdev_lvol.a 00:04:11.967 SO libspdk_bdev_lvol.so.6.0 00:04:11.967 SYMLINK libspdk_bdev_virtio.so 00:04:11.967 SYMLINK libspdk_bdev_lvol.so 00:04:12.533 LIB libspdk_bdev_raid.a 00:04:12.533 SO libspdk_bdev_raid.so.6.0 00:04:12.533 SYMLINK libspdk_bdev_raid.so 00:04:13.469 LIB libspdk_bdev_nvme.a 00:04:13.727 SO libspdk_bdev_nvme.so.7.0 00:04:13.727 SYMLINK libspdk_bdev_nvme.so 00:04:13.986 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:13.986 CC module/event/subsystems/keyring/keyring.o 00:04:13.986 CC module/event/subsystems/iobuf/iobuf.o 00:04:13.986 CC module/event/subsystems/sock/sock.o 00:04:13.986 CC module/event/subsystems/vmd/vmd.o 00:04:13.986 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:13.986 CC module/event/subsystems/scheduler/scheduler.o 00:04:13.986 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:13.986 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.244 LIB libspdk_event_keyring.a 00:04:14.244 LIB libspdk_event_vhost_blk.a 00:04:14.244 LIB libspdk_event_sock.a 00:04:14.244 LIB libspdk_event_vfu_tgt.a 00:04:14.244 LIB libspdk_event_scheduler.a 00:04:14.244 LIB libspdk_event_vmd.a 00:04:14.244 LIB libspdk_event_iobuf.a 00:04:14.244 SO libspdk_event_keyring.so.1.0 00:04:14.244 SO libspdk_event_sock.so.5.0 00:04:14.244 SO libspdk_event_vhost_blk.so.3.0 00:04:14.244 SO libspdk_event_vfu_tgt.so.3.0 00:04:14.244 SO libspdk_event_scheduler.so.4.0 00:04:14.244 SO libspdk_event_vmd.so.6.0 00:04:14.244 SO libspdk_event_iobuf.so.3.0 00:04:14.244 SYMLINK libspdk_event_keyring.so 00:04:14.244 SYMLINK libspdk_event_vhost_blk.so 00:04:14.244 SYMLINK libspdk_event_sock.so 00:04:14.244 SYMLINK libspdk_event_vfu_tgt.so 00:04:14.244 SYMLINK libspdk_event_scheduler.so 00:04:14.244 SYMLINK libspdk_event_vmd.so 00:04:14.244 SYMLINK libspdk_event_iobuf.so 00:04:14.502 CC module/event/subsystems/accel/accel.o 00:04:14.762 LIB libspdk_event_accel.a 00:04:14.762 SO libspdk_event_accel.so.6.0 00:04:14.762 SYMLINK libspdk_event_accel.so 00:04:15.021 CC module/event/subsystems/bdev/bdev.o 00:04:15.021 LIB libspdk_event_bdev.a 00:04:15.021 SO libspdk_event_bdev.so.6.0 00:04:15.280 SYMLINK libspdk_event_bdev.so 00:04:15.280 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:15.280 CC 
module/event/subsystems/scsi/scsi.o 00:04:15.280 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:15.280 CC module/event/subsystems/ublk/ublk.o 00:04:15.280 CC module/event/subsystems/nbd/nbd.o 00:04:15.538 LIB libspdk_event_ublk.a 00:04:15.538 LIB libspdk_event_nbd.a 00:04:15.538 LIB libspdk_event_scsi.a 00:04:15.538 SO libspdk_event_ublk.so.3.0 00:04:15.538 SO libspdk_event_nbd.so.6.0 00:04:15.538 SO libspdk_event_scsi.so.6.0 00:04:15.538 SYMLINK libspdk_event_ublk.so 00:04:15.538 SYMLINK libspdk_event_nbd.so 00:04:15.538 LIB libspdk_event_nvmf.a 00:04:15.538 SYMLINK libspdk_event_scsi.so 00:04:15.538 SO libspdk_event_nvmf.so.6.0 00:04:15.796 SYMLINK libspdk_event_nvmf.so 00:04:15.796 CC module/event/subsystems/iscsi/iscsi.o 00:04:15.796 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:15.796 LIB libspdk_event_vhost_scsi.a 00:04:15.796 LIB libspdk_event_iscsi.a 00:04:16.053 SO libspdk_event_vhost_scsi.so.3.0 00:04:16.053 SO libspdk_event_iscsi.so.6.0 00:04:16.053 SYMLINK libspdk_event_vhost_scsi.so 00:04:16.053 SYMLINK libspdk_event_iscsi.so 00:04:16.053 SO libspdk.so.6.0 00:04:16.053 SYMLINK libspdk.so 00:04:16.316 CXX app/trace/trace.o 00:04:16.316 CC app/spdk_top/spdk_top.o 00:04:16.316 CC app/trace_record/trace_record.o 00:04:16.316 CC app/spdk_nvme_perf/perf.o 00:04:16.316 TEST_HEADER include/spdk/accel.h 00:04:16.316 CC app/spdk_nvme_identify/identify.o 00:04:16.316 CC test/rpc_client/rpc_client_test.o 00:04:16.316 CC app/spdk_lspci/spdk_lspci.o 00:04:16.316 TEST_HEADER include/spdk/accel_module.h 00:04:16.316 CC app/spdk_nvme_discover/discovery_aer.o 00:04:16.316 TEST_HEADER include/spdk/assert.h 00:04:16.316 TEST_HEADER include/spdk/barrier.h 00:04:16.316 TEST_HEADER include/spdk/base64.h 00:04:16.316 TEST_HEADER include/spdk/bdev.h 00:04:16.316 TEST_HEADER include/spdk/bdev_module.h 00:04:16.316 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.316 TEST_HEADER include/spdk/bit_array.h 00:04:16.316 TEST_HEADER include/spdk/bit_pool.h 00:04:16.316 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.316 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.316 TEST_HEADER include/spdk/blobfs.h 00:04:16.316 TEST_HEADER include/spdk/blob.h 00:04:16.316 TEST_HEADER include/spdk/conf.h 00:04:16.316 TEST_HEADER include/spdk/config.h 00:04:16.316 TEST_HEADER include/spdk/cpuset.h 00:04:16.316 TEST_HEADER include/spdk/crc16.h 00:04:16.316 TEST_HEADER include/spdk/crc32.h 00:04:16.316 TEST_HEADER include/spdk/crc64.h 00:04:16.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.316 TEST_HEADER include/spdk/dif.h 00:04:16.316 TEST_HEADER include/spdk/dma.h 00:04:16.316 CC app/spdk_dd/spdk_dd.o 00:04:16.316 TEST_HEADER include/spdk/endian.h 00:04:16.316 CC app/nvmf_tgt/nvmf_main.o 00:04:16.316 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.316 TEST_HEADER include/spdk/env.h 00:04:16.316 CC app/iscsi_tgt/iscsi_tgt.o 00:04:16.316 TEST_HEADER include/spdk/event.h 00:04:16.316 TEST_HEADER include/spdk/fd_group.h 00:04:16.316 TEST_HEADER include/spdk/fd.h 00:04:16.316 CC app/vhost/vhost.o 00:04:16.316 TEST_HEADER include/spdk/file.h 00:04:16.614 TEST_HEADER include/spdk/ftl.h 00:04:16.614 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.615 TEST_HEADER include/spdk/hexlify.h 00:04:16.615 TEST_HEADER include/spdk/histogram_data.h 00:04:16.615 TEST_HEADER include/spdk/idxd.h 00:04:16.615 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.615 TEST_HEADER include/spdk/init.h 00:04:16.615 TEST_HEADER include/spdk/ioat.h 00:04:16.615 CC app/spdk_tgt/spdk_tgt.o 00:04:16.615 TEST_HEADER include/spdk/ioat_spec.h 
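The LIB/SO/SYMLINK triplets printed for each SPDK component above follow the usual versioned shared-library layout: LIB is the static archive, SO the versioned shared object, and SYMLINK the unversioned name the linker resolves. A rough sketch using libspdk_log (version 7.0, as reported above) and its three objects; the linker flags are illustrative, not SPDK's verbatim recipe:

  # LIB: archive the objects into the static library
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o
  # SO: link the versioned shared object (the soname shown is illustrative)
  cc -shared -o libspdk_log.so.7.0 -Wl,-soname,libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o
  # SYMLINK: unversioned development name, what -lspdk_log finds at link time
  ln -sf libspdk_log.so.7.0 libspdk_log.so

Every component in the log goes through the same three steps before the app and test binaries below link against it.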
00:04:16.615 CC test/env/memory/memory_ut.o 00:04:16.615 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.615 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.615 CC test/app/histogram_perf/histogram_perf.o 00:04:16.615 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.615 TEST_HEADER include/spdk/json.h 00:04:16.615 CC examples/nvme/hello_world/hello_world.o 00:04:16.615 CC test/env/pci/pci_ut.o 00:04:16.615 CC examples/util/zipf/zipf.o 00:04:16.615 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.615 CC test/env/vtophys/vtophys.o 00:04:16.615 TEST_HEADER include/spdk/keyring.h 00:04:16.615 CC test/app/stub/stub.o 00:04:16.615 CC examples/accel/perf/accel_perf.o 00:04:16.615 CC examples/ioat/verify/verify.o 00:04:16.615 CC examples/sock/hello_world/hello_sock.o 00:04:16.615 TEST_HEADER include/spdk/keyring_module.h 00:04:16.615 CC test/event/event_perf/event_perf.o 00:04:16.615 TEST_HEADER include/spdk/likely.h 00:04:16.615 CC test/app/jsoncat/jsoncat.o 00:04:16.615 CC examples/ioat/perf/perf.o 00:04:16.615 TEST_HEADER include/spdk/log.h 00:04:16.615 TEST_HEADER include/spdk/lvol.h 00:04:16.615 CC examples/idxd/perf/perf.o 00:04:16.615 CC app/fio/nvme/fio_plugin.o 00:04:16.615 TEST_HEADER include/spdk/memory.h 00:04:16.615 TEST_HEADER include/spdk/mmio.h 00:04:16.615 CC test/thread/poller_perf/poller_perf.o 00:04:16.615 CC test/nvme/aer/aer.o 00:04:16.615 TEST_HEADER include/spdk/nbd.h 00:04:16.615 TEST_HEADER include/spdk/notify.h 00:04:16.615 TEST_HEADER include/spdk/nvme.h 00:04:16.615 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.615 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.615 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.615 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.615 CC examples/blob/cli/blobcli.o 00:04:16.615 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.615 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.615 CC test/accel/dif/dif.o 00:04:16.615 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.615 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.615 CC test/bdev/bdevio/bdevio.o 00:04:16.615 TEST_HEADER include/spdk/nvmf.h 00:04:16.615 CC test/blobfs/mkfs/mkfs.o 00:04:16.615 CC examples/thread/thread/thread_ex.o 00:04:16.615 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.615 CC examples/blob/hello_world/hello_blob.o 00:04:16.615 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.615 CC test/dma/test_dma/test_dma.o 00:04:16.615 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.615 CC examples/nvmf/nvmf/nvmf.o 00:04:16.615 TEST_HEADER include/spdk/opal.h 00:04:16.615 TEST_HEADER include/spdk/opal_spec.h 00:04:16.615 CC test/app/bdev_svc/bdev_svc.o 00:04:16.615 TEST_HEADER include/spdk/pci_ids.h 00:04:16.615 TEST_HEADER include/spdk/pipe.h 00:04:16.615 TEST_HEADER include/spdk/queue.h 00:04:16.615 TEST_HEADER include/spdk/reduce.h 00:04:16.615 TEST_HEADER include/spdk/rpc.h 00:04:16.615 TEST_HEADER include/spdk/scheduler.h 00:04:16.615 TEST_HEADER include/spdk/scsi.h 00:04:16.615 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.615 TEST_HEADER include/spdk/sock.h 00:04:16.615 TEST_HEADER include/spdk/stdinc.h 00:04:16.615 TEST_HEADER include/spdk/string.h 00:04:16.615 TEST_HEADER include/spdk/thread.h 00:04:16.615 TEST_HEADER include/spdk/trace.h 00:04:16.615 TEST_HEADER include/spdk/trace_parser.h 00:04:16.615 TEST_HEADER include/spdk/tree.h 00:04:16.615 TEST_HEADER include/spdk/ublk.h 00:04:16.615 TEST_HEADER include/spdk/util.h 00:04:16.615 CC test/lvol/esnap/esnap.o 00:04:16.615 TEST_HEADER include/spdk/uuid.h 00:04:16.615 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.615 
LINK spdk_lspci 00:04:16.615 TEST_HEADER include/spdk/version.h 00:04:16.615 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.615 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.615 TEST_HEADER include/spdk/vhost.h 00:04:16.615 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:16.615 TEST_HEADER include/spdk/vmd.h 00:04:16.615 TEST_HEADER include/spdk/xor.h 00:04:16.615 TEST_HEADER include/spdk/zipf.h 00:04:16.615 CXX test/cpp_headers/accel.o 00:04:16.901 LINK rpc_client_test 00:04:16.901 LINK spdk_nvme_discover 00:04:16.901 LINK lsvmd 00:04:16.901 LINK interrupt_tgt 00:04:16.901 LINK histogram_perf 00:04:16.901 LINK zipf 00:04:16.901 LINK vtophys 00:04:16.901 LINK nvmf_tgt 00:04:16.901 LINK jsoncat 00:04:16.901 LINK event_perf 00:04:16.901 LINK env_dpdk_post_init 00:04:16.901 LINK poller_perf 00:04:16.901 LINK vhost 00:04:16.901 LINK spdk_trace_record 00:04:16.901 LINK stub 00:04:16.901 LINK iscsi_tgt 00:04:16.901 LINK verify 00:04:16.901 LINK spdk_tgt 00:04:16.901 LINK ioat_perf 00:04:16.901 LINK bdev_svc 00:04:16.901 LINK hello_world 00:04:16.901 LINK hello_sock 00:04:16.901 LINK mkfs 00:04:17.174 LINK hello_blob 00:04:17.174 LINK hello_bdev 00:04:17.174 LINK thread 00:04:17.174 CXX test/cpp_headers/accel_module.o 00:04:17.174 LINK aer 00:04:17.174 CXX test/cpp_headers/assert.o 00:04:17.174 LINK spdk_dd 00:04:17.174 LINK nvmf 00:04:17.174 CC examples/vmd/led/led.o 00:04:17.174 LINK idxd_perf 00:04:17.174 CC test/nvme/reset/reset.o 00:04:17.174 LINK spdk_trace 00:04:17.174 LINK pci_ut 00:04:17.174 CXX test/cpp_headers/barrier.o 00:04:17.174 CXX test/cpp_headers/base64.o 00:04:17.174 CC test/nvme/sgl/sgl.o 00:04:17.174 CC test/event/reactor/reactor.o 00:04:17.174 CC examples/nvme/reconnect/reconnect.o 00:04:17.174 CXX test/cpp_headers/bdev.o 00:04:17.175 CC test/nvme/e2edp/nvme_dp.o 00:04:17.443 CC examples/nvme/arbitration/arbitration.o 00:04:17.443 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:17.443 LINK test_dma 00:04:17.443 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.443 CC examples/nvme/hotplug/hotplug.o 00:04:17.443 CC app/fio/bdev/fio_plugin.o 00:04:17.443 LINK bdevio 00:04:17.443 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.443 CC test/event/reactor_perf/reactor_perf.o 00:04:17.443 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.443 LINK dif 00:04:17.443 CC test/event/app_repeat/app_repeat.o 00:04:17.443 LINK accel_perf 00:04:17.443 LINK led 00:04:17.443 CC test/nvme/overhead/overhead.o 00:04:17.443 LINK nvme_fuzz 00:04:17.443 CC test/event/scheduler/scheduler.o 00:04:17.443 CC test/nvme/err_injection/err_injection.o 00:04:17.443 CXX test/cpp_headers/bdev_module.o 00:04:17.443 LINK blobcli 00:04:17.443 LINK spdk_nvme 00:04:17.443 CC test/nvme/startup/startup.o 00:04:17.705 CC test/nvme/reserve/reserve.o 00:04:17.705 CC test/nvme/simple_copy/simple_copy.o 00:04:17.705 LINK reactor 00:04:17.705 CC test/nvme/connect_stress/connect_stress.o 00:04:17.705 CXX test/cpp_headers/bdev_zone.o 00:04:17.705 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:17.705 CXX test/cpp_headers/bit_array.o 00:04:17.705 CC test/nvme/boot_partition/boot_partition.o 00:04:17.705 CC examples/nvme/abort/abort.o 00:04:17.705 CXX test/cpp_headers/bit_pool.o 00:04:17.705 CC test/nvme/compliance/nvme_compliance.o 00:04:17.705 LINK reactor_perf 00:04:17.705 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:17.705 CC test/nvme/fused_ordering/fused_ordering.o 00:04:17.705 CXX test/cpp_headers/blob_bdev.o 00:04:17.705 LINK app_repeat 00:04:17.705 CXX test/cpp_headers/blobfs_bdev.o 00:04:17.705 CXX 
test/cpp_headers/blobfs.o 00:04:17.705 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:17.705 CXX test/cpp_headers/blob.o 00:04:17.705 LINK reset 00:04:17.705 CXX test/cpp_headers/conf.o 00:04:17.705 LINK hotplug 00:04:17.705 LINK sgl 00:04:17.705 CXX test/cpp_headers/config.o 00:04:17.971 CXX test/cpp_headers/cpuset.o 00:04:17.971 LINK nvme_dp 00:04:17.971 LINK mem_callbacks 00:04:17.971 CXX test/cpp_headers/crc16.o 00:04:17.971 CC test/nvme/fdp/fdp.o 00:04:17.971 LINK err_injection 00:04:17.971 LINK spdk_nvme_perf 00:04:17.971 CXX test/cpp_headers/crc32.o 00:04:17.971 CC test/nvme/cuse/cuse.o 00:04:17.971 CXX test/cpp_headers/crc64.o 00:04:17.971 LINK startup 00:04:17.971 CXX test/cpp_headers/dif.o 00:04:17.971 LINK spdk_nvme_identify 00:04:17.971 LINK scheduler 00:04:17.971 LINK reconnect 00:04:17.971 LINK arbitration 00:04:17.971 LINK connect_stress 00:04:17.971 CXX test/cpp_headers/dma.o 00:04:17.971 LINK boot_partition 00:04:17.971 LINK cmb_copy 00:04:17.971 LINK reserve 00:04:17.971 CXX test/cpp_headers/endian.o 00:04:17.971 CXX test/cpp_headers/env_dpdk.o 00:04:17.971 CXX test/cpp_headers/env.o 00:04:17.971 LINK simple_copy 00:04:17.971 LINK bdevperf 00:04:17.971 LINK overhead 00:04:17.971 LINK spdk_top 00:04:17.971 LINK pmr_persistence 00:04:17.971 CXX test/cpp_headers/event.o 00:04:18.236 CXX test/cpp_headers/fd_group.o 00:04:18.236 CXX test/cpp_headers/fd.o 00:04:18.236 CXX test/cpp_headers/file.o 00:04:18.236 CXX test/cpp_headers/ftl.o 00:04:18.236 CXX test/cpp_headers/gpt_spec.o 00:04:18.236 CXX test/cpp_headers/hexlify.o 00:04:18.236 CXX test/cpp_headers/histogram_data.o 00:04:18.236 CXX test/cpp_headers/idxd.o 00:04:18.236 LINK fused_ordering 00:04:18.236 CXX test/cpp_headers/idxd_spec.o 00:04:18.236 CXX test/cpp_headers/init.o 00:04:18.236 LINK doorbell_aers 00:04:18.236 CXX test/cpp_headers/ioat.o 00:04:18.236 CXX test/cpp_headers/ioat_spec.o 00:04:18.236 CXX test/cpp_headers/iscsi_spec.o 00:04:18.236 LINK vhost_fuzz 00:04:18.236 CXX test/cpp_headers/json.o 00:04:18.236 CXX test/cpp_headers/jsonrpc.o 00:04:18.236 LINK nvme_manage 00:04:18.236 CXX test/cpp_headers/keyring.o 00:04:18.236 CXX test/cpp_headers/keyring_module.o 00:04:18.236 LINK spdk_bdev 00:04:18.236 CXX test/cpp_headers/likely.o 00:04:18.236 CXX test/cpp_headers/log.o 00:04:18.236 CXX test/cpp_headers/lvol.o 00:04:18.236 CXX test/cpp_headers/memory.o 00:04:18.236 LINK nvme_compliance 00:04:18.236 CXX test/cpp_headers/mmio.o 00:04:18.236 CXX test/cpp_headers/nbd.o 00:04:18.236 CXX test/cpp_headers/notify.o 00:04:18.236 CXX test/cpp_headers/nvme.o 00:04:18.236 CXX test/cpp_headers/nvme_intel.o 00:04:18.503 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.503 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.503 CXX test/cpp_headers/nvme_spec.o 00:04:18.503 CXX test/cpp_headers/nvme_zns.o 00:04:18.503 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.503 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.503 LINK abort 00:04:18.503 CXX test/cpp_headers/nvmf.o 00:04:18.503 CXX test/cpp_headers/nvmf_spec.o 00:04:18.503 CXX test/cpp_headers/nvmf_transport.o 00:04:18.503 CXX test/cpp_headers/opal.o 00:04:18.503 CXX test/cpp_headers/opal_spec.o 00:04:18.503 CXX test/cpp_headers/pci_ids.o 00:04:18.503 CXX test/cpp_headers/pipe.o 00:04:18.503 CXX test/cpp_headers/queue.o 00:04:18.503 CXX test/cpp_headers/reduce.o 00:04:18.503 CXX test/cpp_headers/rpc.o 00:04:18.503 CXX test/cpp_headers/scheduler.o 00:04:18.503 CXX test/cpp_headers/scsi.o 00:04:18.503 CXX test/cpp_headers/sock.o 00:04:18.503 CXX test/cpp_headers/scsi_spec.o 00:04:18.503 CXX 
test/cpp_headers/stdinc.o 00:04:18.503 LINK fdp 00:04:18.503 CXX test/cpp_headers/string.o 00:04:18.503 CXX test/cpp_headers/thread.o 00:04:18.503 CXX test/cpp_headers/trace.o 00:04:18.503 CXX test/cpp_headers/trace_parser.o 00:04:18.503 CXX test/cpp_headers/tree.o 00:04:18.503 CXX test/cpp_headers/ublk.o 00:04:18.503 CXX test/cpp_headers/util.o 00:04:18.503 CXX test/cpp_headers/uuid.o 00:04:18.762 CXX test/cpp_headers/version.o 00:04:18.762 CXX test/cpp_headers/vfio_user_pci.o 00:04:18.762 CXX test/cpp_headers/vfio_user_spec.o 00:04:18.762 CXX test/cpp_headers/vhost.o 00:04:18.762 CXX test/cpp_headers/vmd.o 00:04:18.762 LINK memory_ut 00:04:18.762 CXX test/cpp_headers/xor.o 00:04:18.762 CXX test/cpp_headers/zipf.o 00:04:19.693 LINK cuse 00:04:19.693 LINK iscsi_fuzz 00:04:22.973 LINK esnap 00:04:22.973 00:04:22.973 real 0m40.293s 00:04:22.973 user 7m34.995s 00:04:22.973 sys 1m49.633s 00:04:22.973 06:33:10 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:22.973 06:33:10 make -- common/autotest_common.sh@10 -- $ set +x 00:04:22.973 ************************************ 00:04:22.973 END TEST make 00:04:22.973 ************************************ 00:04:22.973 06:33:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:22.973 06:33:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:22.973 06:33:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:22.973 06:33:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.973 06:33:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:22.973 06:33:10 -- pm/common@44 -- $ pid=401907 00:04:22.973 06:33:10 -- pm/common@50 -- $ kill -TERM 401907 00:04:22.973 06:33:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.973 06:33:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:22.973 06:33:10 -- pm/common@44 -- $ pid=401909 00:04:22.973 06:33:10 -- pm/common@50 -- $ kill -TERM 401909 00:04:22.973 06:33:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.973 06:33:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:22.973 06:33:10 -- pm/common@44 -- $ pid=401911 00:04:22.973 06:33:10 -- pm/common@50 -- $ kill -TERM 401911 00:04:22.973 06:33:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.973 06:33:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:22.973 06:33:10 -- pm/common@44 -- $ pid=401940 00:04:22.973 06:33:10 -- pm/common@50 -- $ sudo -E kill -TERM 401940 00:04:22.973 06:33:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.973 06:33:10 -- nvmf/common.sh@7 -- # uname -s 00:04:22.973 06:33:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.973 06:33:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.973 06:33:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.973 06:33:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.973 06:33:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.973 06:33:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.973 06:33:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.973 06:33:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.973 06:33:10 -- nvmf/common.sh@16 -- # 
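The pm/common trace above shows how autotest tears down its background resource monitors: each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) records its PID in a file under the power/ output directory, and shutdown only signals monitors whose pid file still exists. A condensed sketch of that loop, not the verbatim helper, with the output directory as a placeholder variable:

  # Simplified version of the signal_monitor_resources TERM sequence traced above.
  power_dir=$OUTPUT_DIR/power   # e.g. .../spdk/../output/power in this run
  for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile=$power_dir/$monitor.pid
      [[ -e $pidfile ]] || continue        # collector never started; nothing to stop
      pid=$(<"$pidfile")                   # e.g. 401907 for collect-cpu-load here
      kill -TERM "$pid" && rm -f "$pidfile"
  done

The rm -f is an assumption of this sketch; the trace only shows the existence check and the kill.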
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.973 06:33:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.973 06:33:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.973 06:33:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.973 06:33:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.973 06:33:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.973 06:33:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:22.973 06:33:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.973 06:33:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.973 06:33:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.973 06:33:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.973 06:33:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.973 06:33:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.974 06:33:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.974 06:33:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.974 06:33:10 -- paths/export.sh@5 -- # export PATH 00:04:22.974 06:33:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.974 06:33:10 -- nvmf/common.sh@47 -- # : 0 00:04:22.974 06:33:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.974 06:33:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.974 06:33:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.974 06:33:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.974 06:33:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.974 06:33:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.974 06:33:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.974 06:33:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.974 06:33:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:22.974 06:33:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:22.974 06:33:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:22.974 06:33:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:22.974 06:33:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:22.974 06:33:10 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:22.974 06:33:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:22.974 06:33:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:22.974 06:33:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:22.974 06:33:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:22.974 06:33:10 -- spdk/autotest.sh@48 -- # udevadm_pid=479127 00:04:22.974 06:33:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:22.974 06:33:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:22.974 06:33:10 -- pm/common@17 -- # local monitor 00:04:22.974 06:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.974 06:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.974 06:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.974 06:33:10 -- pm/common@21 -- # date +%s 00:04:22.974 06:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.974 06:33:10 -- pm/common@21 -- # date +%s 00:04:22.974 06:33:10 -- pm/common@25 -- # sleep 1 00:04:22.974 06:33:10 -- pm/common@21 -- # date +%s 00:04:22.974 06:33:10 -- pm/common@21 -- # date +%s 00:04:22.974 06:33:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721017990 00:04:22.974 06:33:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721017990 00:04:22.974 06:33:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721017990 00:04:22.974 06:33:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721017990 00:04:22.974 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721017990_collect-vmstat.pm.log 00:04:22.974 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721017990_collect-cpu-load.pm.log 00:04:22.974 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721017990_collect-cpu-temp.pm.log 00:04:22.974 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721017990_collect-bmc-pm.bmc.pm.log 00:04:24.351 06:33:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.351 06:33:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.351 06:33:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.351 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:04:24.351 06:33:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.351 06:33:11 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:24.351 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:04:24.351 06:33:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:24.351 06:33:11 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.351 06:33:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.351 06:33:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:24.351 06:33:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.351 06:33:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.351 06:33:11 -- common/autotest_common.sh@1451 -- # uname 00:04:24.351 06:33:11 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:24.351 06:33:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:24.351 06:33:11 -- common/autotest_common.sh@1471 -- # uname 00:04:24.351 06:33:11 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:24.351 06:33:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:24.351 06:33:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:24.351 06:33:11 -- spdk/autotest.sh@72 -- # hash lcov 00:04:24.351 06:33:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:24.351 06:33:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:24.351 --rc lcov_branch_coverage=1 00:04:24.351 --rc lcov_function_coverage=1 00:04:24.351 --rc genhtml_branch_coverage=1 00:04:24.351 --rc genhtml_function_coverage=1 00:04:24.351 --rc genhtml_legend=1 00:04:24.351 --rc geninfo_all_blocks=1 00:04:24.351 ' 00:04:24.351 06:33:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:24.351 --rc lcov_branch_coverage=1 00:04:24.351 --rc lcov_function_coverage=1 00:04:24.351 --rc genhtml_branch_coverage=1 00:04:24.351 --rc genhtml_function_coverage=1 00:04:24.351 --rc genhtml_legend=1 00:04:24.351 --rc geninfo_all_blocks=1 00:04:24.351 ' 00:04:24.351 06:33:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:24.351 --rc lcov_branch_coverage=1 00:04:24.351 --rc lcov_function_coverage=1 00:04:24.351 --rc genhtml_branch_coverage=1 00:04:24.351 --rc genhtml_function_coverage=1 00:04:24.351 --rc genhtml_legend=1 00:04:24.351 --rc geninfo_all_blocks=1 00:04:24.351 --no-external' 00:04:24.351 06:33:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:24.351 --rc lcov_branch_coverage=1 00:04:24.351 --rc lcov_function_coverage=1 00:04:24.351 --rc genhtml_branch_coverage=1 00:04:24.351 --rc genhtml_function_coverage=1 00:04:24.351 --rc genhtml_legend=1 00:04:24.351 --rc geninfo_all_blocks=1 00:04:24.351 --no-external' 00:04:24.351 06:33:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:24.351 lcov: LCOV version 1.14 00:04:24.351 06:33:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:39.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:39.229 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:54.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:54.158 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
[the same "<name>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for <name>.gcno" pair repeats here, timestamps 00:04:54.158 to 00:04:54.159, for each remaining header-only object under spdk/test/cpp_headers/: ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvmf_cmd, nvme_zns, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, pci_ids, pipe, reduce, queue, opal_spec, rpc, scheduler, scsi_spec, stdinc, string, sock, scsi, thread, trace, trace_parser, tree, util, ublk, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf]
00:04:57.447 06:33:44 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:57.447 06:33:44 -- common/autotest_common.sh@720 -- # xtrace_disable
00:04:57.447 06:33:44 -- common/autotest_common.sh@10 -- # set +x
00:04:57.447 06:33:44 -- spdk/autotest.sh@91 -- # rm -f
00:04:57.447 06:33:44 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:58.382 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:58.382 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:58.382 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:58.382 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:58.382 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:58.382 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:58.382 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:58.382 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:58.382 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:58.382 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:58.382 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:58.382 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:58.382 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:58.382 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:58.382 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:58.382 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:58.382 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:58.641 06:33:46 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:58.641 06:33:46 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:04:58.641 06:33:46 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:04:58.641 06:33:46 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:04:58.641 06:33:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:58.641 06:33:46 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:04:58.641 06:33:46 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:04:58.641 06:33:46 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:58.641 06:33:46 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:04:58.641 06:33:46 --
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:58.641 06:33:46 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.641 06:33:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.641 06:33:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:58.641 06:33:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:58.641 06:33:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:58.641 No valid GPT data, bailing 00:04:58.641 06:33:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.641 06:33:46 -- scripts/common.sh@391 -- # pt= 00:04:58.641 06:33:46 -- scripts/common.sh@392 -- # return 1 00:04:58.641 06:33:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:58.641 1+0 records in 00:04:58.641 1+0 records out 00:04:58.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00240939 s, 435 MB/s 00:04:58.641 06:33:46 -- spdk/autotest.sh@118 -- # sync 00:04:58.641 06:33:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:58.641 06:33:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:58.641 06:33:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.543 06:33:47 -- spdk/autotest.sh@124 -- # uname -s 00:05:00.543 06:33:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:00.543 06:33:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:00.543 06:33:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.543 06:33:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.543 06:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.543 ************************************ 00:05:00.543 START TEST setup.sh 00:05:00.543 ************************************ 00:05:00.543 06:33:47 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:00.543 * Looking for test storage... 00:05:00.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.543 06:33:48 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:00.543 06:33:48 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:00.543 06:33:48 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:00.543 06:33:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.543 06:33:48 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.543 06:33:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:00.543 ************************************ 00:05:00.543 START TEST acl 00:05:00.543 ************************************ 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:00.543 * Looking for test storage... 
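[Aside: a minimal bash sketch of what the pre_cleanup trace above is doing, reconstructed from the xtrace lines rather than copied from SPDK's scripts. The helper names (get_zoned_devs, is_block_zoned, block_in_use) come from the trace; the standalone script form, and using blkid alone in place of the spdk-gpt.py GPT probe, are simplifying assumptions.

    #!/usr/bin/env bash
    shopt -s extglob                 # needed for the /dev/nvme*n!(*p*) glob

    declare -A zoned_devs

    # get_zoned_devs: a namespace counts as zoned when its queue reports
    # anything other than "none" in /sys/block/<dev>/queue/zoned
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done

    # Wipe whole namespaces (nvme0n1, ..., not partitions like nvme0n1p1)
    # that are not zoned and hold no recognizable partition table, so the
    # tests that follow start from clean block devices.
    for dev in /dev/nvme*n!(*p*); do
        [[ -n ${zoned_devs[${dev##*/}]} ]] && continue
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # zero the first MiB
        fi
    done
]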
00:05:00.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.543 06:33:48 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:00.543 06:33:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:00.543 06:33:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.543 06:33:48 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.916 06:33:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:01.916 06:33:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:01.916 06:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.916 06:33:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:01.916 06:33:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.916 06:33:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:03.288 Hugepages 00:05:03.288 node hugesize free / total 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.288 00:05:03.288 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]]
00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:03.288 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same "[[ <BDF> == *:*:*.* ]]" / "[[ ioatdma == nvme ]]" / "continue" / "read -r _ dev _ _ _ driver _" sequence repeats for the remaining I/OAT channels, 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.3; timestamps 00:05:03.288 to 00:05:03.289]
00:05:03.289 06:33:50
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:03.289 06:33:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:03.289 06:33:50 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.289 06:33:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.289 06:33:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.289 ************************************ 00:05:03.289 START TEST denied 00:05:03.289 ************************************ 00:05:03.289 06:33:50 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:03.289 06:33:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:05:03.289 06:33:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:03.289 06:33:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:05:03.289 06:33:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.289 06:33:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.665 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:05:04.665 06:33:52 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.665 06:33:52 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.195 00:05:07.195 real 0m3.797s 00:05:07.195 user 0m1.119s 00:05:07.195 sys 0m1.766s 00:05:07.195 06:33:54 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.195 06:33:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:07.195 ************************************ 00:05:07.195 END TEST denied 00:05:07.195 ************************************ 00:05:07.196 06:33:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:07.196 06:33:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.196 06:33:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.196 06:33:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.196 ************************************ 00:05:07.196 START TEST allowed 00:05:07.196 ************************************ 00:05:07.196 06:33:54 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:07.196 06:33:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:05:07.196 06:33:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:07.196 06:33:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:05:07.196 06:33:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.196 06:33:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.727 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.727 06:33:56 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:09.727 06:33:56 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:09.727 06:33:56 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:09.727 06:33:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.727 06:33:56 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.102 00:05:11.102 real 0m3.835s 00:05:11.102 user 0m0.971s 00:05:11.102 sys 0m1.675s 00:05:11.102 06:33:58 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.102 06:33:58 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:11.102 ************************************ 00:05:11.102 END TEST allowed 00:05:11.102 ************************************ 00:05:11.102 00:05:11.102 real 0m10.360s 00:05:11.102 user 0m3.142s 00:05:11.102 sys 0m5.184s 00:05:11.102 06:33:58 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.102 06:33:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:11.102 ************************************ 00:05:11.102 END TEST acl 00:05:11.102 ************************************ 00:05:11.102 06:33:58 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:11.102 06:33:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.102 06:33:58 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.102 06:33:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:11.102 ************************************ 00:05:11.102 START TEST hugepages 00:05:11.103 ************************************ 00:05:11.103 06:33:58 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:11.103 * Looking for test storage... 00:05:11.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 41197912 kB' 'MemAvailable: 44705308 kB' 'Buffers: 2704 kB' 'Cached: 12737248 kB' 'SwapCached: 0 kB' 'Active: 9731812 kB' 'Inactive: 3506596 kB' 'Active(anon): 9337220 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501776 kB' 'Mapped: 179176 kB' 'Shmem: 8838764 kB' 'KReclaimable: 200096 kB' 'Slab: 573060 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 372964 kB' 'KernelStack: 12944 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 10452752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
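[Aside: the long stretch of trace that resumes below is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize. A minimal sketch of that helper, reconstructed from the xtrace above rather than copied from SPDK, so details may differ:

    #!/usr/bin/env bash
    shopt -s extglob                 # for stripping the "Node <n> " prefix

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node meminfo carries the same fields, prefixed "Node <n> "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo Hugepagesize         # prints 2048 (kB) on this host
    get_meminfo MemFree 0            # MemFree restricted to NUMA node 0
]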
00:05:11.103 06:33:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the scan repeats "IFS=': '" / "read -r var val _" / "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" for every remaining /proc/meminfo field, MemFree through HugePages_Surp, before matching; timestamps 00:05:11.103 to 00:05:11.104]
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:11.104 06:33:58
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:11.104 06:33:58 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:11.104 06:33:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.104 06:33:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.104 06:33:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.104 ************************************ 00:05:11.104 START TEST default_setup 00:05:11.104 ************************************ 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:11.104 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.105 06:33:58 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.479 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.479 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:05:12.479 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.479 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:13.452 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43340400 kB' 'MemAvailable: 46847752 kB' 'Buffers: 2704 kB' 'Cached: 12737352 kB' 'SwapCached: 0 kB' 'Active: 9744520 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349928 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514360 kB' 'Mapped: 178956 kB' 'Shmem: 8838868 kB' 'KReclaimable: 200008 kB' 'Slab: 572976 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372968 kB' 'KernelStack: 12640 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10468952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 
06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.452 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.453 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- 
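The stretch above is the xtrace of setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until AnonHugePages matches, yielding anon=0. A condensed sketch of that scan pattern follows — illustrative only, under the assumption that the helper reduces to a plain read loop; this is not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Minimal sketch of the scan traced above: split each "Key:   value kB"
    # line of /proc/meminfo on ':' and spaces, and return the value whose
    # key matches the requested field. Names mirror the trace; the exact
    # shape of the real helper is assumed.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done </proc/meminfo
        return 1
    }
    anon=$(get_meminfo AnonHugePages)   # 0 kB on this machine, hence anon=0

The real helper reads the whole file into an array with mapfile and loops in-shell, which is why every non-matching key surfaces as its own "continue" line in the trace.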
setup/common.sh@20 -- # local mem_f mem 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43339960 kB' 'MemAvailable: 46847312 kB' 'Buffers: 2704 kB' 'Cached: 12737352 kB' 'SwapCached: 0 kB' 'Active: 9745256 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350664 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515060 kB' 'Mapped: 178884 kB' 'Shmem: 8838868 kB' 'KReclaimable: 200008 kB' 'Slab: 572960 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12672 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10468968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.454 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.454 06:34:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.455 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
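One detail visible in the trace: get_meminfo takes an optional node argument, and with node= left empty the test [[ -e /sys/devices/system/node/node/meminfo ]] can never pass, so the system-wide /proc/meminfo is used. A hedged sketch of that source selection, with the helper name assumed:

    # Sketch: pick a system-wide or per-NUMA-node meminfo file, mirroring
    # the sysfs existence test seen in the trace above.
    meminfo_source() {
        local node=${1:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    meminfo_source     # -> /proc/meminfo
    meminfo_source 0   # -> /sys/devices/system/node/node0/meminfo, if present

Per-node meminfo lines carry a "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") extglob expansion in the trace strips before the key comparison.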
-r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43339712 kB' 'MemAvailable: 46847064 kB' 'Buffers: 2704 kB' 'Cached: 12737372 kB' 'SwapCached: 0 kB' 'Active: 9744832 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514596 kB' 'Mapped: 178808 kB' 'Shmem: 8838888 kB' 'KReclaimable: 200008 kB' 'Slab: 572960 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12672 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10468992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.456 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.457 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.458 06:34:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue
00:05:13.458 [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue for each remaining /proc/meminfo key (KReclaimable through HugePages_Free) until HugePages_Rsvd matches]
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:13.459 nr_hugepages=1024
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:13.459 resv_hugepages=0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:13.459 surplus_hugepages=0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:13.459 anon_hugepages=0
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:13.459 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43339940 kB' 'MemAvailable: 46847292 kB' 'Buffers: 2704 kB' 'Cached: 12737392 kB' 'SwapCached: 0 kB' 'Active: 9744920 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350328 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514660 kB' 'Mapped: 178808 kB' 'Shmem: 8838908 kB' 'KReclaimable: 200008 kB' 'Slab: 572960 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12688 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:13.459 [xtrace condensed: setup/common.sh@31-32 scan the /proc/meminfo keys (MemTotal through Unaccepted) for HugePages_Total, continue on every non-matching key]
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
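The condensed trace above is one call to the harness's get_meminfo helper being logged statement by statement: it reads /proc/meminfo (or a per-node meminfo file under sysfs), then scans "key: value" pairs until the requested field matches and echoes the value (here HugePages_Total -> 1024, and earlier HugePages_Rsvd -> 0). A minimal self-contained sketch of that lookup pattern, with assumed names rather than the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Sketch (assumed names) of the meminfo lookup walked through above:
    # pick the system-wide or per-node meminfo file, then scan
    # "key: value" pairs for a single field.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines carry a "Node <id>" prefix; strip it so the keys
        # line up, then split each line on ':' and whitespace.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # value without unit, e.g. "1024" or "60541692"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this runner
    get_meminfo HugePages_Surp 0  # node0 lookup, prints 0

The real helper slurps the whole file into an array with mapfile and walks it with the same IFS=': ' / read loop, which is why every non-matching key appears in the xtrace as a [[ ... ]] test followed by continue.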
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.461 06:34:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:13.461 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19680880 kB' 'MemUsed: 13196060 kB' 'SwapCached: 0 kB' 'Active: 6690720 kB' 'Inactive: 3265212 kB' 'Active(anon): 6502148 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667304 kB' 'Mapped: 71568 kB' 'AnonPages: 291760 kB' 'Shmem: 6213520 kB' 'KernelStack: 7128 kB' 'PageTables: 5296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313392 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:13.462 [xtrace condensed: setup/common.sh@31-32 scan the node0 meminfo keys (MemTotal through HugePages_Free) for HugePages_Surp, continue on every non-matching key]
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:13.463 node0=1024 expecting 1024
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:13.463 
00:05:13.463 real 0m2.462s
00:05:13.463 user 0m0.689s
00:05:13.463 sys 0m0.891s
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:13.463 06:34:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:13.463 ************************************
00:05:13.463 END TEST default_setup
00:05:13.463 ************************************
00:05:13.463 06:34:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:13.463 06:34:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:13.463 06:34:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:13.463 06:34:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:13.722 ************************************
00:05:13.722 START TEST per_node_1G_alloc
00:05:13.722 ************************************
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.722 06:34:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:14.657 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.657 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:14.657 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.657 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.657 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.657 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.657 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.657 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.657 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.657 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.657 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.657 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.657 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.657 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.657 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.657 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.657 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
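The get_test_nr_hugepages trace above turns the requested size (1048576 kB, i.e. 1 GB) into a count of default-sized 2048 kB hugepages and assigns that count to each NUMA node named on the command line, giving nodes_test[0]=512 and nodes_test[1]=512 before setup.sh is invoked with NRHUGE=512 and HUGENODE=0,1. A compact sketch of that arithmetic, with assumed names and the per-node helper folded into one function for brevity:

    #!/usr/bin/env bash
    # Sketch (assumed names) of the per-node hugepage split traced above;
    # the real script spreads this across get_test_nr_hugepages and
    # get_test_nr_hugepages_per_node.
    declare -a nodes_test=()
    default_hugepages=2048   # kB, the Hugepagesize reported in /proc/meminfo

    get_test_nr_hugepages() {
        local size=$1 node
        shift
        local node_ids=("$@")
        (( size >= default_hugepages )) || return 1
        # 1048576 kB / 2048 kB per page = 512 hugepages for each listed node.
        local nr_hugepages=$(( size / default_hugepages ))
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0 1
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=512 node1=512

With 512 pages reserved on each of the two nodes, the system-wide total the verification step reads back is 1024, which is the nr_hugepages=1024 recorded just above.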
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.657 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43354068 kB' 'MemAvailable: 46861420 kB' 'Buffers: 2704 kB' 'Cached: 12737460 kB' 'SwapCached: 0 kB' 'Active: 9745532 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515212 kB' 'Mapped: 178972 kB' 'Shmem: 8838976 kB' 'KReclaimable: 200008 kB' 'Slab: 573096 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373088 kB' 'KernelStack: 12768 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:14.921 [xtrace condensed: setup/common.sh@31-32 checked MemTotal through Slab against AnonHugePages, continue on each non-matching key; the scan resumes below]
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
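The condensed scan above is the xtrace expansion of a small parsing loop: get_meminfo splits each meminfo line on ': ' and prints the value of the first key that matches the requested one. A minimal standalone sketch of that pattern, reconstructed from the trace (the real setup/common.sh also selects per-node files, shown further below; reading /proc/meminfo directly here is a simplification):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced at common.sh@31-33: split each
    # meminfo line on ': ', skip non-matching keys with 'continue', and echo
    # the value of the requested key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every miss is one 'continue' entry in the log
            echo "$val"                        # unit suffix lands in $_, so only the number prints
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 in this run, hence anon=0 at hugepages.sh@97

Because the loop restarts from the first key on every call, each get_meminfo invocation replays the full key list, which is why the same scan appears three times in this section of the log.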
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.921 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43355356 kB' 'MemAvailable: 46862708 kB' 'Buffers: 2704 kB' 'Cached: 12737460 kB' 'SwapCached: 0 kB' 'Active: 9745244 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350652 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514896 kB' 'Mapped: 178908 kB' 'Shmem: 8838976 kB' 'KReclaimable: 200008 kB' 'Slab: 573096 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373088 kB' 'KernelStack: 12768 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same per-key read/compare/continue scan repeats for every /proc/meminfo key until HugePages_Surp matches]
00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
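The local node= / mem_f= lines above show how get_meminfo picks its data source: with an empty node argument the /sys/devices/system/node/node/meminfo existence test fails and the function falls back to /proc/meminfo, while with a node number it would read the per-node file and strip the 'Node N ' prefix those lines carry. A hedged reconstruction of just that selection step (the helper name read_meminfo_lines is invented for illustration):

    #!/usr/bin/env bash
    # Reconstruction of the source selection traced at common.sh@18-29.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    read_meminfo_lines() {
        local node=$1 mem_f mem
        mem_f=/proc/meminfo
        # With node="" this tests /sys/devices/system/node/node/meminfo,
        # which does not exist -- exactly the failed check seen in the log.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with 'Node N '; strip it so one
        # parser handles both sources. This is a no-op for /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo_lines ''   # system-wide stats, as in this run
    read_meminfo_lines 0    # NUMA node 0 stats, when node0 exists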
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43355808 kB' 'MemAvailable: 46863160 kB' 'Buffers: 2704 kB' 'Cached: 12737484 kB' 'SwapCached: 0 kB' 'Active: 9745108 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350516 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514788 kB' 'Mapped: 178832 kB' 'Shmem: 8839000 kB' 'KReclaimable: 200008 kB' 'Slab: 573068 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373060 kB' 'KernelStack: 12784 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.923 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.924 06:34:02 
00:05:14.924 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [condensed: compared the remaining /proc/meminfo keys (PageTables through HugePages_Free) against HugePages_Rsvd; every non-matching key hit "continue"]
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
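For readability, the per-key comparisons condensed above all come from one loop. A minimal bash sketch of that loop, mirroring the setup/common.sh steps visible in this trace (the simplified argument handling and the _sketch name are illustrative assumptions, not the script's full implementation):

# Condensed sketch of the per-key scan traced above. Each meminfo line
# is split on ': ' into key and value; every key that does not match the
# requested one is one "continue" in the xtrace, and the matching key's
# value is echoed before returning.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
}
get_meminfo_sketch HugePages_Rsvd   # printed 0 on this run, hence resv=0

Each non-matching key is one "continue" fragment in the raw xtrace, which is why the unabridged log repeats that fragment once per meminfo field.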
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43355808 kB' 'MemAvailable: 46863160 kB' 'Buffers: 2704 kB' 'Cached: 12737508 kB' 'SwapCached: 0 kB' 'Active: 9744940 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350348 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514540 kB' 'Mapped: 178832 kB' 'Shmem: 8839024 kB' 'KReclaimable: 200008 kB' 'Slab: 573068 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373060 kB' 'KernelStack: 12752 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:14.925 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [condensed: compared every system meminfo key from MemTotal through Unaccepted against HugePages_Total; every non-matching key hit "continue"]
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
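The get_nodes call traced at hugepages.sh@27-@33 enumerates the NUMA node directories and seeds each with the 512 pages this test wants per node. A runnable sketch under the same assumptions (the extglob and nullglob shopts are implied by the +([0-9]) pattern, not shown in the trace):

# Sketch of the get_nodes step: one array slot per NUMA node directory,
# each seeded with 512 hugepages; no_nodes is the node count (2 here).
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512   # "...node1" -> index 1
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) && echo "no_nodes=$no_nodes"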
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20728556 kB' 'MemUsed: 12148384 kB' 'SwapCached: 0 kB' 'Active: 6690920 kB' 'Inactive: 3265212 kB' 'Active(anon): 6502348 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667308 kB' 'Mapped: 71592 kB' 'AnonPages: 291952 kB' 'Shmem: 6213524 kB' 'KernelStack: 7192 kB' 'PageTables: 5348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313480 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:14.927 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [condensed: compared node0 meminfo keys MemTotal through HugePages_Free against HugePages_Surp; every non-matching key hit "continue"]
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22627404 kB' 'MemUsed: 5037348 kB' 'SwapCached: 0 kB' 'Active: 3054304 kB' 'Inactive: 241384 kB' 'Active(anon): 2848284 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3072948 kB' 'Mapped: 107240 kB' 'AnonPages: 222836 kB' 'Shmem: 2625544 kB' 'KernelStack: 5592 kB' 'PageTables: 2624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84116 kB' 'Slab: 259588 kB' 'SReclaimable: 84116 kB' 'SUnreclaim: 175472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
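Per the common.sh@22-@29 steps above, the only thing the node argument changes is the input file and a prefix strip: per-node meminfo lines carry a "Node N " prefix that a parameter expansion removes before the same key scan runs. A hedged sketch of just that selection (node1 exists only on a multi-node machine like this one):

# Sketch of the per-node source selection traced at common.sh@22-@29:
# fall back to /proc/meminfo, switch to the node's own meminfo when it
# exists, and strip the "Node N " prefix its lines carry, e.g.
# "Node 1 HugePages_Surp:      0" -> "HugePages_Surp:      0".
shopt -s extglob
node=1
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep HugePages_Surp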
00:05:14.928 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [condensed: compared node1 meminfo keys MemTotal through Unaccepted against HugePages_Surp; every non-matching key hit "continue"]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.930 node0=512 expecting 512 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:14.930 node1=512 expecting 512 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:14.930 00:05:14.930 real 0m1.343s 00:05:14.930 user 0m0.560s 00:05:14.930 sys 0m0.744s 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.930 06:34:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.930 ************************************ 00:05:14.930 END TEST per_node_1G_alloc 00:05:14.930 ************************************ 00:05:14.930 06:34:02 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:14.930 06:34:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.930 06:34:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.930 06:34:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.930 ************************************ 00:05:14.930 START TEST even_2G_alloc 
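The even_2G_alloc test starting here asks get_test_nr_hugepages for 2097152 kB; with the 2048 kB default hugepage size that resolves to nr_hugepages=1024, and because HUGE_EVEN_ALLOC=yes the pages are split evenly across both NUMA nodes, 512 apiece, which is what the node0/node1 "expecting 512" checks assert. A minimal sketch of that arithmetic, with illustrative variable names rather than the actual setup/hugepages.sh internals:

#!/usr/bin/env bash
# Sketch: how 2097152 kB becomes "512 per node" on a 2-node machine.
size_kb=2097152              # requested allocation: 2 GiB in kB
hugepage_kb=2048             # Hugepagesize reported by /proc/meminfo
nodes=2                      # NUMA nodes on the test rig

nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024
per_node=$(( nr_hugepages / nodes ))        # 512

for (( n = 0; n < nodes; n++ )); do
    echo "node${n}=${per_node} expecting ${per_node}"
done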
00:05:14.930 ************************************ 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.930 06:34:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.864 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:15.864 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:15.864 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:15.864 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:15.864 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:15.864 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:15.864 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:05:15.864 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:15.864 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:15.864 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:15.864 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:15.864 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:16.127 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:16.127 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:16.127 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:16.127 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:16.127 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43347000 kB' 'MemAvailable: 46854352 kB' 'Buffers: 2704 kB' 'Cached: 12737604 kB' 'SwapCached: 0 kB' 'Active: 9745100 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350508 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514544 kB' 'Mapped: 178848 kB' 'Shmem: 8839120 kB' 'KReclaimable: 200008 kB' 'Slab: 572960 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12768 kB' 'PageTables: 
7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.127 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue / setup/common.sh@31 -- # read -r var val _, repeated for: Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
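The trace above and below is common.sh's get_meminfo walking /proc/meminfo one "Field: value" pair at a time with IFS=': ' until the requested field (AnonHugePages here) matches, then echoing its value; node-scoped queries would read /sys/devices/system/node/node<N>/meminfo instead. A simplified sketch of that scan pattern, reconstructed from the xtrace rather than copied verbatim from setup/common.sh:

get_meminfo() {
    # Usage: get_meminfo <Field> [node]; prints the field's value (kB or count).
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # Per-node counters live under sysfs when a node number is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node "$node" }            # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                  # e.g. AnonHugePages -> 0 in this run
            return 0
        fi
    done < "$mem_f"
    return 1
}

Every non-matching field shows up in the xtrace as one [[ ... ]] test plus a continue, which is why these scans dominate the log.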
00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.128 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.129 06:34:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43346500 kB' 'MemAvailable: 46853852 kB' 'Buffers: 2704 kB' 'Cached: 12737608 kB' 'SwapCached: 0 kB' 'Active: 9745472 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350880 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514900 kB' 'Mapped: 178840 kB' 'Shmem: 8839124 kB' 'KReclaimable: 200008 kB' 'Slab: 572960 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12832 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.129 06:34:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ / setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue, repeated for: Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal 00:05:16.130 06:34:03
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.130 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43346924 kB' 'MemAvailable: 46854276 kB' 'Buffers: 2704 kB' 'Cached: 12737620 kB' 'SwapCached: 0 kB' 'Active: 9745736 kB' 'Inactive: 3506596 kB' 'Active(anon): 9351144 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515192 kB' 'Mapped: 178840 kB' 'Shmem: 8839136 kB' 'KReclaimable: 200008 kB' 'Slab: 572976 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372968 kB' 'KernelStack: 12848 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:16.131 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(the @31 read and @32 test/continue trace repeats for every remaining meminfo field until the requested key matches)
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:16.132 nr_hugepages=1024
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:16.132 resv_hugepages=0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:16.132 surplus_hugepages=0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:16.132 anon_hugepages=0
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
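The scan traced above is get_meminfo walking /proc/meminfo one field at a time until it reaches HugePages_Rsvd. A minimal standalone sketch of that lookup pattern follows; the function name lookup_meminfo and the missing-key fallback are illustrative assumptions, not taken from setup/common.sh:

    #!/usr/bin/env bash
    # Each meminfo line is "Field:   value [kB]", so IFS=': ' splits it
    # into the field name ($var), the number ($val) and the unit ($_).
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the test/continue pairs traced above
            echo "$val"                       # matched: print the value
            return 0
        done < /proc/meminfo
        echo 0    # assumption: report 0 when the field is absent
    }
    lookup_meminfo HugePages_Rsvd   # prints 0 on this host, per the snapshot above

The backslash-escaped pattern in the trace, \H\u\g\e\P\a\g\e\s\_\R\s\v\d, is just how bash xtrace renders the right-hand side of that [[ ... == ... ]] test.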
00:05:16.132 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43346924 kB' 'MemAvailable: 46854276 kB' 'Buffers: 2704 kB' 'Cached: 12737644 kB' 'SwapCached: 0 kB' 'Active: 9745448 kB' 'Inactive: 3506596 kB' 'Active(anon): 9350856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514880 kB' 'Mapped: 178840 kB' 'Shmem: 8839160 kB' 'KReclaimable: 200008 kB' 'Slab: 572956 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372948 kB' 'KernelStack: 12832 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10469396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:16.133 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(the @31 read and @32 test/continue trace repeats for every remaining meminfo field, as above)
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
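The global lookups above all read /proc/meminfo; the per-node calls that follow differ only in the data source. With a node argument, common.sh prefers /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that is stripped with an extglob substitution before the same field scan runs. An illustrative sketch of that source selection (the name read_meminfo is an assumption, but the existence test and the prefix strip mirror common.sh@22-29 in the traces):

    #!/usr/bin/env bash
    shopt -s extglob
    read_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        # Prefer the per-node copy when it exists; with an empty $node the
        # path /sys/devices/system/node/node/meminfo never exists, so the
        # global file wins, exactly as in the trace above.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Strip the "Node <N> " prefix so both sources parse identically
        # (the same extglob substitution as common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }
    read_meminfo 0   # node0's meminfo, without the "Node 0 " prefixes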
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:16.395 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
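get_nodes has just recorded an expected 512 hugepages for each of the two NUMA nodes (nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2), and the @115 loop that follows reads each node's HugePages_Surp counter. As a quick cross-check of the arithmetic the even_2G_alloc test relies on, the per-node HugePages_Total counters should sum to the global count, 512 + 512 == 1024 here. The sketch below is ours, reusing only the node glob from hugepages.sh@29; the awk extraction and variable names are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    total=0
    for node in /sys/devices/system/node/node+([0-9]); do
        # Per-node lines look like "Node 0 HugePages_Total:   512".
        (( total += $(awk '/HugePages_Total/ {print $NF}' "$node/meminfo") ))
    done
    global=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    (( total == global )) && echo "per-node hugepages sum to the global total: $total"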
setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.396 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.397 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.397 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.397 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22628064 kB' 'MemUsed: 5036688 kB' 'SwapCached: 0 kB' 'Active: 3055024 kB' 'Inactive: 241384 kB' 'Active(anon): 2849004 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3073044 kB' 'Mapped: 107240 kB' 'AnonPages: 223484 kB' 'Shmem: 2625640 kB' 'KernelStack: 5656 kB' 'PageTables: 2668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84116 kB' 'Slab: 259536 kB' 'SReclaimable: 84116 kB' 'SUnreclaim: 175420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:16.397 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the field scan continues past every node1 meminfo key (MemTotal through Unaccepted, then HugePages_Total and HugePages_Free) until HugePages_Surp matches]
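The lookup traced above is just a key scan over a meminfo file. As a minimal standalone sketch of that technique (the function name is mine, and the per-node "Node <N> " prefix is stripped here with sed rather than the script's extglob parameter expansion):

    get_meminfo_sketch() {                  # illustrative, not SPDK's own helper
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node meminfo lives under /sys and prefixes every line with "Node <N> ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Split each "Key: value kB" line on ':' and spaces; echo the first match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    get_meminfo_sketch HugePages_Surp 1   # would print 0 on the node traced above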
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:16.398 node0=512 expecting 512
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:16.398 node1=512 expecting 512
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:16.398 real    0m1.344s
00:05:16.398 user    0m0.563s
00:05:16.398 sys     0m0.725s
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:16.398 06:34:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:16.398 ************************************
00:05:16.398 END TEST even_2G_alloc
00:05:16.398 ************************************
00:05:16.398 06:34:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:16.398 06:34:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:16.398 06:34:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:16.398 06:34:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:16.399 ************************************
00:05:16.399 START TEST odd_alloc
00:05:16.399 ************************************
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
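Before following the odd_alloc trace further: the 2098176 kB request is HUGEMEM=2049 MiB expressed in kB, and with the default 2048 kB hugepage size it works out to the 1025 pages assigned below. A quick back-of-envelope check in bash, assuming the quotient is rounded up (the variable names are mine, but the result matches the nr_hugepages=1025 in the trace):

    size_kb=$(( 2049 * 1024 ))                         # 2098176 kB, as requested above
    pagesz_kb=2048                                     # default hugepage size
    echo $(( (size_kb + pagesz_kb - 1) / pagesz_kb ))  # 1025 pages (1024.5 rounded up)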
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.399 06:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
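The @81..@84 lines above walk the node list from the highest index down, giving each node its share of what remains, which is how 1025 pages end up split 513/512. A sketch of that distribution, reconstructed from the trace rather than copied from hugepages.sh:

    total=1025 nodes=2
    declare -a per_node
    for (( i = nodes - 1; i >= 0; i-- )); do
        per_node[i]=$(( total / (i + 1) ))   # this node's share of what is left
        total=$(( total - per_node[i] ))     # remainder flows to lower-numbered nodes
    done
    echo "node0=${per_node[0]} node1=${per_node[1]}"   # node0=513 node1=512

Run as-is, this prints the same 513/512 split the trace assigns, with the extra page landing on node0.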
00:05:17.333 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:17.334 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:17.334 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:17.334 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:17.334 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:17.334 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:17.334 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:17.334 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:17.334 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:17.334 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:17.596 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:17.596 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:17.596 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:17.596 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:17.596 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:17.596 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:17.596 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43351716 kB' 'MemAvailable: 46859068 kB' 'Buffers: 2704 kB' 'Cached: 12737736 kB' 'SwapCached: 0 kB' 'Active: 9743908 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513284 kB' 'Mapped: 178828 kB' 'Shmem: 8839252 kB' 'KReclaimable: 200008 kB' 'Slab: 572888 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372880 kB' 'KernelStack: 12784 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10488316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
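The hugepages.sh@96 test above checks the transparent-hugepage mode string ("always [madvise] never") for a selected [never]; only when THP is not disabled does the verifier go on to read AnonHugePages. A hedged sketch of that gate (the sysfs path is the kernel's standard one; the surrounding control flow is my reconstruction from the trace):

    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled (here "[madvise]"), so anonymous huge pages may exist.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon:-0}"   # the trace below arrives at anon=0 the same way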
00:05:17.596 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the field scan walks every /proc/meminfo key from MemTotal through HardwareCorrupted, continuing past each one until AnonHugePages matches]
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot condensed: near-identical to the one above apart from small drifts in a few counters ('MemFree: 43354932 kB', 'AnonPages: 512144 kB', 'Mapped: 178816 kB', 'KernelStack: 12752 kB', 'PageTables: 7364 kB', 'VmallocUsed: 196096 kB', 'Committed_AS: 10488336 kB'); the hugepage counters are unchanged: 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB']
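Both snapshots show HugePages_Total: 1025 against HugePages_Surp: 0, which is what the verification pass is working toward. Roughly, and only as a plausible reading of the hugepages.sh flow rather than its exact code, the check amounts to comparing the persistent page count with what the test requested:

    expected=1025
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    # Surplus pages are transient overcommit, so they do not count toward the goal.
    if (( total - surp == expected )); then
        echo "nr_hugepages verified: $total pages"
    else
        echo "mismatch: have $((total - surp)), expected $expected" >&2
    fi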
00:05:17.597 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the scan again continues past every meminfo key, MemTotal through HugePages_Rsvd, on its way to HugePages_Surp; the captured log is cut off mid-scan at the next line]
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # read -r var val _ 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43357792 kB' 'MemAvailable: 46865144 kB' 'Buffers: 2704 kB' 'Cached: 12737756 kB' 'SwapCached: 0 kB' 'Active: 9742884 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512284 kB' 'Mapped: 178696 kB' 'Shmem: 8839272 kB' 'KReclaimable: 200008 kB' 'Slab: 572848 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372840 kB' 'KernelStack: 12816 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10494316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
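The loop traced above is setup/common.sh's get_meminfo scanning a snapshot of /proc/meminfo: IFS=': ' plus read -r var val _ splits each 'Key: value kB' record into the key, the number, and the ignored unit, and every non-matching key falls through to continue. A minimal standalone sketch of that parsing pattern (get_meminfo_value is a hypothetical name, and it reads the file directly rather than via mapfile as the real script does):

    # Sketch, assuming plain /proc/meminfo input; not the script's actual function.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # each line is "Key:   value kB"; IFS=': ' splits on ':' and spaces
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value HugePages_Surp   # -> 0 in this run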
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43357792 kB' 'MemAvailable: 46865144 kB' 'Buffers: 2704 kB' 'Cached: 12737756 kB' 'SwapCached: 0 kB' 'Active: 9742884 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512284 kB' 'Mapped: 178696 kB' 'Shmem: 8839272 kB' 'KReclaimable: 200008 kB' 'Slab: 572848 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372840 kB' 'KernelStack: 12816 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10494316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:17.599 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past MemTotal through HugePages_Free (the keys of the snapshot above); none match HugePages_Rsvd]
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:17.601 nr_hugepages=1025
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.601 resv_hugepages=0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.601 surplus_hugepages=0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.601 anon_hugepages=0
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
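The @107 check above asserts that the requested 1025 pages equal nr_hugepages + surp + resv; @110 (below) then re-reads HugePages_Total to confirm the kernel agrees. A minimal sketch of that accounting, reusing the hypothetical get_meminfo_value helper sketched earlier:

    # Sketch of the consistency check, not the script's literal code.
    requested=1025                               # pages odd_alloc asked for
    nr=$(get_meminfo_value HugePages_Total)      # 1025 in this run
    surp=$(get_meminfo_value HugePages_Surp)     # 0
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0
    (( requested == nr + surp + resv )) || echo 'hugepage accounting mismatch' >&2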
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43358044 kB' 'MemAvailable: 46865396 kB' 'Buffers: 2704 kB' 'Cached: 12737776 kB' 'SwapCached: 0 kB' 'Active: 9743232 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348640 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512692 kB' 'Mapped: 178696 kB' 'Shmem: 8839292 kB' 'KReclaimable: 200008 kB' 'Slab: 572848 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372840 kB' 'KernelStack: 12816 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10488748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:17.601 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past MemTotal through Unaccepted (the keys of the snapshot above); none match HugePages_Total]
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
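/proc/meminfo thus confirms HugePages_Total: 1025, matching the requested odd count. The get_nodes step that follows records the expected per-node split: 512 pages on node0 and 513 on node1. One way to reproduce that split, assuming an even division with the remainder going to the last node (the exact rule lives in setup/hugepages.sh and is only inferred from the traced values):

    # Sketch: distribute an odd hugepage count across NUMA nodes.
    total=1025 nodes=2
    for (( n = 0; n < nodes; n++ )); do
        count=$(( total / nodes ))                            # 512 per node
        (( n == nodes - 1 )) && (( count += total % nodes ))  # remainder -> last node
        echo "node$n: $count hugepages expected"
    done
    # -> node0: 512, node1: 513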
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.863 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20726644 kB' 'MemUsed: 12150296 kB' 'SwapCached: 0 kB' 'Active: 6688188 kB' 'Inactive: 3265212 kB' 'Active(anon): 6499616 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667320 kB' 'Mapped: 71060 kB' 'AnonPages: 289236 kB' 'Shmem: 6213536 kB' 'KernelStack: 7160 kB' 'PageTables: 5088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313332 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past the node0 snapshot keys seen so far (MemTotal through KernelStack), still scanning for HugePages_Surp]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.864 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22631644 kB' 'MemUsed: 5033108 kB' 'SwapCached: 0 kB' 'Active: 3054956 kB' 'Inactive: 241384 kB' 'Active(anon): 2848936 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3073204 kB' 'Mapped: 107636 kB' 'AnonPages: 223252 kB' 'Shmem: 2625800 kB' 'KernelStack: 5704 kB' 'PageTables: 2656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84116 kB' 'Slab: 259516 kB' 'SReclaimable: 84116 kB' 'SUnreclaim: 175400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 
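
The long field-by-field walk traced above (and repeated below for node 1) is get_meminfo scanning a meminfo file until the requested key matches, echoing the value, and returning. A minimal sketch of the same mapfile/IFS/read pattern, not the SPDK helper verbatim:

    shopt -s extglob   # for the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # per-node stats live in a separate sysfs file when a node id is given
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node lines are prefixed "Node 0 MemTotal: ..."; drop the prefix
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Surp 0   -> 0, as traced above
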
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.865 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513'
00:05:17.866 node0=512 expecting 513
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:17.866 node1=513 expecting 512
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:17.866
00:05:17.866 real 0m1.410s
00:05:17.866 user 0m0.580s
00:05:17.866 sys 0m0.787s
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:17.866 06:34:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:17.866 ************************************
00:05:17.866 END TEST odd_alloc
00:05:17.866 ************************************
00:05:17.866 06:34:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:17.866 06:34:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:17.866 06:34:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:17.866 06:34:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:17.866 ************************************
00:05:17.866 START TEST custom_alloc
00:05:17.866 ************************************
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
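
The odd_alloc verdict above comes from a compact bash trick: each per-node count is written into an indexed array as an index (sorted_t[nodes_test[node]]=1), so "${!sorted_t[*]}" expands the counts deduplicated and in ascending numeric order. Comparing the two key lists, as the traced [[ 512 513 == \5\1\2\ \5\1\3 ]] does, checks that the kernel's per-node spread matches the test's plan without caring which node got the odd page. A self-contained illustration with the values from this run:

    nodes_test=(513 512)   # what odd_alloc asked for per node
    nodes_sys=(512 513)    # what the kernel actually placed
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # count used as array *index*
        sorted_s[nodes_sys[node]]=1
    done
    # indexed-array keys expand in sorted order, so both sides become "512 513"
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node spread OK'
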
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:17.866 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
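
The hugepages.sh@81-84 lines just traced are the per-node divider: each pass assigns remaining-pages/remaining-nodes to the highest-numbered node left, and the ": 256" / ": 1" entries are the no-op colon exposing the updated remainder and node counter in xtrace. A simplified reconstruction consistent with those traced values, not the helper verbatim:

    split_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2
        nodes_test=()
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ": 256" then ": 0"
            : $(( --_no_nodes ))                                  # traced as ": 1" then ": 0"
        done
        echo "${nodes_test[*]}"
    }
    split_per_node 512 2    # -> 256 256, the pass traced above
    split_per_node 1025 2   # -> 513 512, odd_alloc's split: node 0 carries the extra page
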
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.867 06:34:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:18.801 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:18.801 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:18.801 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:18.801 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:18.801 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:18.801 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:18.801 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:18.801 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:18.801 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:18.801 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:18.801 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:18.801 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:18.801 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:18.801 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:18.801 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:18.801 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:18.801 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc --
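
This is where the HUGENODE string assembled at hugepages.sh@182 pays off: setup.sh reads it to place hugepages per NUMA node, and the "Already using the vfio-pci driver" lines show every listed device was bound by an earlier stage, so this pass only has to redo the memory setup. The placement the test requests here can also be reproduced by hand; the sysfs paths below are standard kernel knobs for 2048 kB pages, matching the Hugepagesize in the dumps:

    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' ./scripts/setup.sh
    # manual equivalent, per node:
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
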
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42290748 kB' 'MemAvailable: 45798100 kB' 'Buffers: 2704 kB' 'Cached: 12737868 kB' 'SwapCached: 0 kB' 'Active: 9748876 kB' 'Inactive: 3506596 kB' 'Active(anon): 9354284 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518440 kB' 'Mapped: 179624 kB' 'Shmem: 8839384 kB' 'KReclaimable: 200008 kB' 'Slab: 572852 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372844 kB' 'KernelStack: 13328 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10494736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196560 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
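
The /proc/meminfo dump above confirms the custom allocation landed: HugePages_Total and HugePages_Free are both 1536 (the 512 + 1024 requested via HUGENODE, none consumed yet), and the Hugetlb line is exactly that page count times the 2048 kB page size:

    echo $(( (512 + 1024) * 2048 ))   # -> 3145728, matching 'Hugetlb: 3145728 kB'
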
00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue
[trace elided: setup/common.sh@32 'continue' scan over the remaining meminfo keys (KernelStack through HardwareCorrupted), none matching AnonHugePages]
00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42287320 kB' 'MemAvailable: 45794672 kB' 'Buffers: 2704 kB' 'Cached: 12737868 kB' 'SwapCached: 0 kB' 'Active: 9749656 kB' 'Inactive: 3506596 kB' 'Active(anon): 9355064 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518816 kB' 'Mapped: 179724 kB' 'Shmem: 8839384 kB' 'KReclaimable: 200008 kB' 'Slab: 572828 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372820 kB' 'KernelStack: 12944 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10495084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[trace elided: setup/common.sh@32 'continue' scan of the snapshot above (MemTotal through HugePages_Total), none matching HugePages_Surp]
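For orientation, the helper these trace lines are stepping through can be reconstructed from the setup/common.sh@NN markers. This is a minimal sketch inferred from the xtrace output, not the verbatim SPDK script: the loop below is an idiomatic equivalent of the traced IFS/read/continue scan, and the argument handling is an assumption.

    shopt -s extglob   # assumed enabled by the suite; needed for +([0-9]) below
    # get_meminfo <key> [node] -- echo the value of one meminfo field
    get_meminfo() {
        local get=$1
        local node=${2:-}            # empty in this run, hence /proc/meminfo
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # common.sh@23-@25: prefer the per-node file when a node is given; with
        # node empty the test degenerates to .../node/node/meminfo as seen above
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # common.sh@29: per-node meminfo lines carry a "Node <n> " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo AnonHugePages (or, as hugepages.sh@99 does here, get_meminfo HugePages_Surp), it walks the snapshot key by key, which is exactly the long run of continue lines elided above.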
00:05:19.068 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42290732 kB' 'MemAvailable: 45798084 kB' 'Buffers: 2704 kB' 'Cached: 12737892 kB' 'SwapCached: 0 kB' 'Active: 9746236 kB' 'Inactive: 3506596 kB' 'Active(anon): 9351644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515584 kB' 'Mapped: 179192 kB' 'Shmem: 8839408 kB' 'KReclaimable: 200008 kB' 'Slab: 572948 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372940 kB' 'KernelStack: 12960 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10492852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[trace elided: setup/common.sh@32 'continue' scan of the snapshot above (MemTotal through HugePages_Free), none matching HugePages_Rsvd]
00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@100 -- # resv=0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:19.070 nr_hugepages=1536 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.070 resv_hugepages=0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.070 surplus_hugepages=0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.070 anon_hugepages=0 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42287360 kB' 'MemAvailable: 45794712 kB' 'Buffers: 2704 kB' 'Cached: 12737912 kB' 'SwapCached: 0 kB' 'Active: 9748932 kB' 'Inactive: 3506596 kB' 'Active(anon): 9354340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518160 kB' 'Mapped: 179624 kB' 'Shmem: 8839428 kB' 'KReclaimable: 200008 kB' 'Slab: 572940 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372932 kB' 'KernelStack: 12880 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10495128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
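Netted out, the hugepages.sh@97-@109 bookkeeping that just completed is a small consistency check: the 1536 pages requested by the custom_alloc test must all be present as plain pool pages, with no anonymous, surplus, or reserved hugepages outstanding. A sketch of that logic, built on the get_meminfo reconstruction above with this run's values in comments (not the verbatim script):

    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> 0
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> 0
    nr_hugepages=1536                    # the request echoed at hugepages.sh@102
    # hugepages.sh@107/@109: the request is satisfied exactly, with no slack
    (( 1536 == nr_hugepages + surp + resv )) || exit 1
    (( 1536 == nr_hugepages )) || exit 1

Both arithmetic tests pass in this run, so the script goes on to re-read HugePages_Total below.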
00:05:19.070 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[trace elided: setup/common.sh@32 'continue' scan of the snapshot above (MemFree through FileHugePages), none matching HugePages_Total]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.071 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
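The field-by-field scan traced above is essentially the whole of the get_meminfo helper: it picks /proc/meminfo, or a per-node meminfo file when a node id is given, strips the leading 'Node N ' prefix from the per-node lines, then walks each 'Key: value' line until the requested key matches and echoes the value. A minimal runnable sketch of that logic, reconstructed from this trace rather than copied from the real setup/common.sh, so names and details are approximations:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Look up one field (e.g. HugePages_Total) in /proc/meminfo, or in a
    # per-node meminfo file when a NUMA node id is supplied.
    get_meminfo() {
      local get=$1 node=$2
      local line var val _
      local mem_f=/proc/meminfo
      local -a mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
        [[ $var == "$get" ]] || continue         # the continue seen in the trace
        echo "$val"
        return 0
      done
      return 1
    }

Called as get_meminfo HugePages_Total with no node argument it would return the 1536 echoed above; with a node id it reads that node's view instead.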
00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20717024 kB' 'MemUsed: 12159916 kB' 'SwapCached: 0 kB' 'Active: 6688212 kB' 'Inactive: 3265212 kB' 'Active(anon): 6499640 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667380 kB' 'Mapped: 71072 kB' 'AnonPages: 289200 kB' 'Shmem: 6213596 kB' 'KernelStack: 7160 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313416 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: each node0 meminfo field, MemTotal through HugePages_Free, is compared against HugePages_Surp and skipped with continue]
00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
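Node 0 reports HugePages_Surp: 0, so its expected count stays at 512 and the loop moves on to node 1. The accounting pattern at hugepages.sh@115-117 amounts to the following sketch, assuming get_meminfo behaves as reconstructed earlier and that resv is 0 in this run:

    declare -A nodes_test=([0]=512 [1]=1024)   # expected per-node counts
    resv=0                                     # reserved pages, 0 in this run

    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                # pad with reserved pages
      surp=$(get_meminfo HugePages_Surp "$node")    # surplus from the kernel
      (( nodes_test[node] += surp ))                # 0 for both nodes here
    done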
00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.072 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 21570084 kB' 'MemUsed: 6094668 kB' 'SwapCached: 0 kB' 'Active: 3055148 kB' 'Inactive: 241384 kB' 'Active(anon): 2849128 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3073256 kB' 'Mapped: 107636 kB' 'AnonPages: 223360 kB' 'Shmem: 2625852 kB' 'KernelStack: 5736 kB' 'PageTables: 2704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84116 kB' 'Slab: 259524 kB' 'SReclaimable: 84116 kB' 'SUnreclaim: 175408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: each node1 meminfo field, MemTotal through HugePages_Free, is compared against HugePages_Surp and skipped with continue]
00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:19.074 node0=512 expecting 512 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:19.074 node1=1024 expecting 1024 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:19.074 00:05:19.074 real 0m1.348s 00:05:19.074 user 0m0.569s 00:05:19.074 sys 0m0.742s 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.074 06:34:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:19.074 ************************************ 00:05:19.074 END TEST custom_alloc 00:05:19.074 ************************************ 00:05:19.332 06:34:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:19.332 06:34:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.332 06:34:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.332 06:34:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:19.332 ************************************ 00:05:19.332 START TEST no_shrink_alloc 00:05:19.332 ************************************ 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
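custom_alloc passes because the comma-joined sorted counts, 512,1024, match the per-node totals the kernel reported, and no_shrink_alloc then sizes its own allocation: the 2097152 kB it requests divided by the 2048 kB default hugepage size (the Hugepagesize field in the dumps) gives the nr_hugepages=1024 seen in the trace, pinned to node 0 via node_ids=('0'). A sketch of that derivation; the arithmetic is inferred from the traced numbers, not lifted from the script source:

    size_kb=2097152          # first argument to get_test_nr_hugepages
    hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" on this system
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, as traced

    node_ids=('0')           # remaining arguments pin the pages to node 0
    declare -A nodes_test=()
    for id in "${node_ids[@]}"; do
      nodes_test[$id]=$nr_hugepages
    done
    echo "nodes_test[0]=${nodes_test[0]}"   # 1024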
00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.332 06:34:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.268 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:20.268 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:20.268 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:20.268 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:20.268 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:20.268 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:20.268 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:20.268 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:20.268 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:20.268 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:20.268 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:20.268 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:20.268 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:20.268 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:20.268 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:20.268 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:20.268 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:20.532 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:20.532 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.532 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.533 06:34:08 
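With every device already bound to vfio-pci, verify_nr_hugepages begins by checking whether transparent hugepages are globally disabled before it uses AnonHugePages as a baseline; the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] succeeds here because the active sysfs mode is [madvise]. A hedged sketch of that gate, reusing the get_meminfo reconstruction from earlier (variable names are assumptions):

    # The bracketed word in this sysfs file is the active THP mode,
    # e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # system-wide THP baseline, in kB
    else
      anon=0                              # THP disabled, nothing to account for
    fi
    echo "AnonHugePages baseline: ${anon} kB"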
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43329324 kB' 'MemAvailable: 46836676 kB' 'Buffers: 2704 kB' 'Cached: 12737996 kB' 'SwapCached: 0 kB' 'Active: 9743560 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348968 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512716 kB' 'Mapped: 178784 kB' 'Shmem: 8839512 kB' 'KReclaimable: 200008 kB' 'Slab: 573104 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373096 kB' 'KernelStack: 12864 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: each meminfo field, MemTotal through SecPageTables, is compared against AnonHugePages and skipped with continue]
00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.533 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43329072 kB' 'MemAvailable: 46836424 kB' 'Buffers: 2704 kB' 'Cached: 12738000 kB' 'SwapCached: 0 kB' 'Active: 9743576 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348984 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512660 kB' 'Mapped: 178720 kB' 'Shmem: 8839516 kB' 'KReclaimable: 200008 kB' 'Slab: 573080 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373072 kB' 'KernelStack: 12880 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 
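For reference, here is a minimal bash sketch of the get_meminfo helper as it can be reconstructed from the xtrace above (the real setup/common.sh also guards on whether a node was requested and may differ in detail; this is a hedged reconstruction, not the shipped code):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo FIELD [NODE]: print the numeric value of FIELD from
    # /proc/meminfo, or from the per-node meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With an empty $node the test path collapses to
        # /sys/devices/system/node/node/meminfo, which never exists
        # (as seen in the trace), so the helper falls back to the
        # system-wide file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value kB" pairs until the requested field matches;
        # this loop is what produces the long [[ ... ]] / continue runs
        # in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this box, stored as anon above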
00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.534 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43329072 kB' 'MemAvailable: 46836424 kB' 'Buffers: 2704 kB' 'Cached: 12738000 kB' 'SwapCached: 0 kB' 'Active: 9743576 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348984 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512660 kB' 'Mapped: 178720 kB' 'Shmem: 8839516 kB' 'KReclaimable: 200008 kB' 'Slab: 573080 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373072 kB' 'KernelStack: 12880 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:20.534
[xtrace elided: setup/common.sh@31-32 compares each field of the snapshot against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continues; every field from MemTotal through HugePages_Rsvd is skipped, timestamps 00:05:20.534-00:05:20.535]
06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
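HugePages_Surp comes back 0, which is the interesting counter for a test named no_shrink_alloc: surplus pages appear when the pool is overcommitted, or when nr_hugepages is shrunk while pages are still in use. A quick side check (not part of this test script) on a stock Linux setup:

    # 0 here means the kernel will not allocate hugepages beyond nr_hugepages:
    cat /proc/sys/vm/nr_overcommit_hugepages
    # matches the 'HugePages_Surp: 0' readings in the snapshots above:
    grep HugePages_Surp /proc/meminfo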
00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.535 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43329464 kB' 'MemAvailable: 46836816 kB' 'Buffers: 2704 kB' 'Cached: 12738016 kB' 'SwapCached: 0 kB' 'Active: 9743660 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512704 kB' 'Mapped: 178720 kB' 'Shmem: 8839532 kB' 'KReclaimable: 200008 kB' 'Slab: 573080 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373072 kB' 'KernelStack: 12864 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:20.535
[xtrace elided: setup/common.sh@31-32 compares each field of the snapshot against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and continues; every field from MemTotal through HugePages_Free is skipped, timestamps 00:05:20.535-00:05:20.536]
06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.536 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.536 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:20.536 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
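With anon, surp and resv all collected as 0, the script echoes the values and asserts the pool accounting below. A standalone sketch of the same invariant, with hypothetical variable names (the 1024 literal is this run's configured nr_hugepages):

    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    # The pool should account for the configured pages plus any surplus
    # and reservations; on this box: 1024 == 1024 + 0 + 0.
    (( total == 1024 + surp + resv )) && echo "hugepage accounting consistent"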
nr_hugepages=1024 00:05:20.537 nr_hugepages=1024 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.537 resv_hugepages=0 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.537 surplus_hugepages=0 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.537 anon_hugepages=0 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43329212 kB' 'MemAvailable: 46836564 kB' 'Buffers: 2704 kB' 'Cached: 12738040 kB' 'SwapCached: 0 kB' 'Active: 9743632 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349040 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512704 kB' 'Mapped: 178720 kB' 'Shmem: 8839556 kB' 'KReclaimable: 200008 kB' 'Slab: 573072 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373064 kB' 'KernelStack: 12896 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 06:34:08 setup.sh.hugepages.no_shrink_alloc -- 
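For reference, the get_meminfo calls traced throughout this section reduce to roughly the following bash. This is a minimal sketch reconstructed from the trace above (the mapfile, the "Node N " strip, and the IFS=': ' scan are all visible in the log); it is not the verbatim setup/common.sh, and details such as the return value on a miss are assumptions.

shopt -s extglob   # required by the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo
	local -a mem
	# When a node index is given, read that node's copy from sysfs.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	# Scan "Key:   value kB" lines until the requested key matches --
	# exactly the compare-or-continue loop visible in the trace.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1   # key not present (assumed behaviour)
}

get_meminfo HugePages_Total     # -> 1024 on this host
get_meminfo HugePages_Surp 0    # per-node lookup on node0 -> 0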
[... setup/common.sh@31-32 read/compare/continue iterations elided: keys MemTotal through Unaccepted each checked against HugePages_Total and skipped ...]
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:20.538 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.797 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.797 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19665116 kB' 'MemUsed: 13211824 kB' 'SwapCached: 0 kB' 'Active: 6688728 kB' 'Inactive: 3265212 kB' 'Active(anon): 6500156 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667436 kB' 'Mapped: 71084 kB' 'AnonPages: 289664 kB' 'Shmem: 6213652 kB' 'KernelStack: 7176 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313544 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
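The get_nodes walk and the per-node accounting traced at setup/hugepages.sh@27-33 and @115-117 above amount to roughly the sketch below. It reuses the get_meminfo sketch shown earlier; how the real script derives the per-node counts and seeds nodes_test is an assumption, not confirmed by the trace (on this host the result is node0=1024, node1=0).

shopt -s extglob
declare -a nodes_sys nodes_test   # indexed by NUMA node number
resv=0                            # from the HugePages_Rsvd lookup earlier

# Sketch of get_nodes: one nodes_sys entry per /sys NUMA node directory.
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# "nodeN" -> "N"; the per-node total is an assumed source here
		nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
	done
	no_nodes=${#nodes_sys[@]}   # 2 on this machine
	(( no_nodes > 0 ))
}

# Per-node check: fold reserved and surplus pages into the expected
# figure before comparing against what sysfs reports.
verify_nodes() {
	local node surp
	for node in "${!nodes_test[@]}"; do
		(( nodes_test[node] += resv ))
		surp=$(get_meminfo HugePages_Surp "$node")
		(( nodes_test[node] += surp ))
		echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	done
}

get_nodes
nodes_test[0]=1024   # hypothetical seed matching this run's layout
verify_nodes         # -> node0=1024 expecting 1024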
[... setup/common.sh@31-32 read/compare/continue iterations elided: node0 keys MemTotal through HugePages_Free each checked against HugePages_Surp and skipped ...]
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:20.798 node0=1024 expecting 1024
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.798 06:34:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:21.730 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:21.730 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:21.730 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:21.730 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:21.730 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:21.730 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:21.730 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:21.730 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:21.730 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:21.730 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:21.730 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:21.730 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:21.730 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:21.730 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:21.730 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:21.730 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:21.730 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:21.730 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:21.994 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:21.994 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:21.994 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
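The INFO line above comes from scripts/setup.sh being run with NRHUGE=512 and CLEAR_HUGE=no against a node that already holds 1024 pages. A hedged sketch of that guard follows: the sysfs nr_hugepages knob is the standard kernel interface, but the surrounding logic is an assumption reconstructed from the message text, not the verbatim setup.sh.

# Assumed shape of the no-shrink guard behind the INFO line above.
NRHUGE=${NRHUGE:-512}
CLEAR_HUGE=${CLEAR_HUGE:-no}
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cur=$(<"$nr")
if [[ $CLEAR_HUGE == no ]] && (( cur >= NRHUGE )); then
	# Keep the larger existing pool rather than shrinking it.
	echo "INFO: Requested $NRHUGE hugepages but $cur already allocated on node0"
else
	echo "$NRHUGE" > "$nr"   # needs root; grows (or shrinks) the pool
fi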
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43334156 kB' 'MemAvailable: 46841508 kB' 'Buffers: 2704 kB' 'Cached: 12738104 kB' 'SwapCached: 0 kB' 'Active: 9743556 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348964 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512516 kB' 'Mapped: 178724 kB' 'Shmem: 8839620 kB' 'KReclaimable: 200008 kB' 'Slab: 572956 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372948 kB' 'KernelStack: 12880 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:05:21.995 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop skips every remaining non-matching /proc/meminfo field, Writeback through HardwareCorrupted, hitting 'continue' on each]
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43336436 kB' 'MemAvailable: 46843788 kB' 'Buffers: 2704 kB' 'Cached: 12738108 kB' 'SwapCached: 0 kB' 'Active: 9743868 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349276 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512872 kB' 'Mapped: 178728 kB' 'Shmem: 8839624 kB' 'KReclaimable: 200008 kB' 'Slab: 572956 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 372948 kB' 'KernelStack: 12944 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:21.996 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop walks every field from MemTotal through HugePages_Rsvd; none matches HugePages_Surp, so each iteration hits 'continue']
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
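The snapshot the script just parsed is internally consistent on the hugepage side: Hugetlb is simply the page count times the page size. A quick check using the values printed in this run:

  # Values taken from the snapshot above.
  hugepages_total=1024    # HugePages_Total
  hugepagesize_kb=2048    # Hugepagesize, in kB
  echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'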
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43336792 kB' 'MemAvailable: 46844144 kB' 'Buffers: 2704 kB' 'Cached: 12738108 kB' 'SwapCached: 0 kB' 'Active: 9743480 kB' 'Inactive: 3506596 kB' 'Active(anon): 9348888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512496 kB' 'Mapped: 178728 kB' 'Shmem: 8839624 kB' 'KReclaimable: 200008 kB' 'Slab: 573020 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373012 kB' 'KernelStack: 12960 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:21.998 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop walks every field from MemTotal through HugePages_Free; none matches HugePages_Rsvd, so each iteration hits 'continue']
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:22.000 nr_hugepages=1024
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:22.000 resv_hugepages=0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:22.000 surplus_hugepages=0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:22.000 anon_hugepages=0
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
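The two arithmetic tests at hugepages.sh@107-109 are the heart of the no_shrink_alloc check: the pool the test requested must equal what the kernel reports, with surplus and reserved pages accounted for. A sketch of that check, with variable names mirroring the trace (the literal script feeds these from get_meminfo; the requested count of 1024 is this run's target):

  requested=1024      # pages the no_shrink_alloc test asked for
  nr_hugepages=1024   # get_meminfo HugePages_Total
  surp=0              # get_meminfo HugePages_Surp
  resv=0              # get_meminfo HugePages_Rsvd
  if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
      echo "hugepage pool matches the requested allocation"
  fi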
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43336540 kB' 'MemAvailable: 46843892 kB' 'Buffers: 2704 kB' 'Cached: 12738148 kB' 'SwapCached: 0 kB' 'Active: 9743792 kB' 'Inactive: 3506596 kB' 'Active(anon): 9349200 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512780 kB' 'Mapped: 178728 kB' 'Shmem: 8839664 kB' 'KReclaimable: 200008 kB' 'Slab: 573020 kB' 'SReclaimable: 200008 kB' 'SUnreclaim: 373012 kB' 'KernelStack: 12960 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10489732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop skips MemTotal through Mlocked, still scanning for HugePages_Total]
00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:05:22.000 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.001 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19667524 kB' 'MemUsed: 13209416 kB' 'SwapCached: 0 kB' 'Active: 6688820 kB' 'Inactive: 3265212 kB' 'Active(anon): 6500248 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667436 
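The get_meminfo helper being traced here answers a single question: what is the current value of one /proc/meminfo field, optionally scoped to a NUMA node, whose per-node meminfo file prefixes every row with "Node N ". Reduced to a stand-alone sketch in bash, assuming only the stock procfs/sysfs layout (the function name and exact structure are illustrative, not the literal code of test/setup/common.sh):

    # Look up one meminfo field, system-wide or for a given NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            # Per-node files prefix every row with "Node N "; strip it.
            [[ -n $node ]] && line=${line#"Node $node "}
            # Split "HugePages_Total:    1024" into field name and value.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # On this box: get_meminfo_sketch HugePages_Total    -> 1024
    #              get_meminfo_sketch HugePages_Surp 0   -> 0

The xtrace above is exactly this loop unrolled: one [[ ... ]] comparison and one "continue" per field until the requested key matches.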
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.002 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19667524 kB' 'MemUsed: 13209416 kB' 'SwapCached: 0 kB' 'Active: 6688820 kB' 'Inactive: 3265212 kB' 'Active(anon): 6500248 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9667436 kB' 'Mapped: 71092 kB' 'AnonPages: 289768 kB' 'Shmem: 6213652 kB' 'KernelStack: 7160 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115892 kB' 'Slab: 313476 kB' 'SReclaimable: 115892 kB' 'SUnreclaim: 197584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... 00:05:22.002-00:05:22.003, setup/common.sh@31-32: one "continue" per non-matching node0 field, MemTotal through HugePages_Free ...]
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:22.003 node0=1024 expecting 1024
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:22.003
00:05:22.003 real 0m2.786s
00:05:22.003 user 0m1.135s
00:05:22.003 sys 0m1.563s
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:22.003 06:34:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:22.003 ************************************
00:05:22.003 END TEST no_shrink_alloc
00:05:22.003 ************************************
00:05:22.003 06:34:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:22.003 06:34:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... setup/hugepages.sh@39-41: for each of the two nodes and each hugepage size, "echo 0" into the pool ...]
00:05:22.003 06:34:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:22.003 06:34:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:22.003
00:05:22.003 real 0m11.077s
00:05:22.003 user 0m4.266s
00:05:22.003 sys 0m5.689s
00:05:22.003 06:34:09 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:22.003 06:34:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:22.003 ************************************
00:05:22.003 END TEST hugepages
00:05:22.003 ************************************
00:05:22.003 06:34:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:22.003 06:34:09 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:22.003 06:34:09 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:22.003 06:34:09 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:22.003 ************************************
00:05:22.003 START TEST driver
00:05:22.003 ************************************
00:05:22.003 06:34:09 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:22.261 * Looking for test storage...
00:05:22.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:22.261 06:34:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:05:22.261 06:34:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:22.261 06:34:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:24.805 06:34:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:24.805 06:34:12 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:24.805 06:34:12 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:24.805 06:34:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:24.805 ************************************
00:05:24.805 START TEST guess_driver
00:05:24.805 ************************************
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:24.805 Looking for driver=vfio-pci
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.805 06:34:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... 00:05:25.741-00:05:26.937, setup/driver.sh@57-61 repeated per device: each "->" marker line from setup.sh config reports vfio-pci, matching the expected driver ...]
00:05:26.937 06:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:26.937 06:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:26.937 06:34:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:26.937 06:34:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:29.470
00:05:29.470 real 0m4.781s
00:05:29.470 user 0m1.077s
00:05:29.470 sys 0m1.778s
00:05:29.470 06:34:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:29.470 06:34:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:29.470 ************************************
00:05:29.470 END TEST guess_driver
00:05:29.470 ************************************
00:05:29.470
00:05:29.470 real 0m7.311s
00:05:29.470 user 0m1.634s
00:05:29.470 sys 0m2.772s
00:05:29.470 06:34:16 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:29.470 06:34:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:29.470 ************************************
00:05:29.470 END TEST driver
00:05:29.470 ************************************
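guess_driver's decision procedure is visible in the trace just above: vfio-pci wins when the vfio_pci module chain resolves and the machine exposes IOMMU groups (141 of them here). A condensed sketch of that pick, assuming the same sysfs paths (the helper name and the fallback branch are illustrative; SPDK's setup falls back to uio_pci_generic when no usable IOMMU is present):

    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci needs a loadable module chain and at least one IOMMU group.
        if modprobe --show-depends vfio_pci > /dev/null 2>&1 \
            && [[ -e ${groups[0]} ]] && (( ${#groups[@]} > 0 )); then
            echo vfio-pci
        else
            echo uio_pci_generic
        fi
    }

The per-device loop traced above then reads scripts/setup.sh config output and checks that every "->" marker line names the expected driver; fail stays 0 only if all of them do.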
00:05:29.470 06:34:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:29.470 06:34:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:29.470 06:34:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:29.470 06:34:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:29.470 ************************************
00:05:29.470 START TEST devices
00:05:29.470 ************************************
00:05:29.470 06:34:16 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:29.470 * Looking for test storage...
00:05:29.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:29.470 06:34:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:29.470 06:34:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:05:29.470 06:34:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:29.470 06:34:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:30.849 06:34:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:30.849 06:34:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:30.849 06:34:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:05:30.849 06:34:18 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:30.850 No valid GPT data, bailing
00:05:30.850 06:34:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:30.850 06:34:18 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:05:30.850 06:34:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:30.850 06:34:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:30.850 06:34:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:30.850 06:34:18 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:05:30.850 06:34:18 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:30.850 06:34:18 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:30.850 06:34:18 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:30.850 06:34:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:30.850 ************************************
00:05:30.850 START TEST nvme_mount
00:05:30.850 ************************************
00:05:30.850 06:34:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
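Before the mount tests start, each candidate NVMe disk has to pass the two gates traced above: it must not carry a live partition table (spdk-gpt.py and blkid both come back empty, hence "No valid GPT data, bailing"), and it must be at least min_disk_size (3 GiB) large, where the size is derived from the 512-byte sector count in sysfs. A sketch of that qualification, assuming real blkid and sysfs behavior (the helper name and return convention are made up for illustration):

    disk_usable_sketch() {
        local block=$1 min_disk_size=$((3 * 1024 * 1024 * 1024)) pt size
        # Any detected partition-table type means the disk is already in use.
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -z $pt ]] || return 1
        # sysfs reports the size in 512-byte sectors; convert to bytes.
        size=$(( $(< "/sys/block/$block/size") * 512 ))
        (( size >= min_disk_size ))
    }
    # nvme0n1 here: no PTTYPE, 1000204886016 bytes, so it qualifies.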
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.850 06:34:18 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:32.231 Creating new GPT entries in memory. 00:05:32.231 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:32.231 other utilities. 00:05:32.231 06:34:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:32.231 06:34:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.231 06:34:19 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:32.231 06:34:19 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.231 06:34:19 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:33.171 Creating new GPT entries in memory. 00:05:33.171 The operation has completed successfully. 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 499248 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
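The partition_drive arithmetic traced above is worth spelling out: the requested size of 1073741824 bytes is reduced to 512-byte sectors by (( size /= 512 )), giving 2097152; the first partition starts at the conventional LBA 2048, and part_end = 2048 + 2097152 - 1 = 2099199, which is exactly the --new=1:2048:2099199 range handed to sgdisk. A minimal standalone sketch of that flow, with the device path and flock serialization taken from the trace and everything else illustrative:

    #!/usr/bin/env bash
    # Carve $part_no consecutive partitions of $size bytes from $disk,
    # mirroring the part_start/part_end arithmetic in setup/common.sh.
    disk=/dev/nvme0n1              # device under test (from the trace)
    part_no=1                      # number of partitions to create
    size=$((1024 * 1024 * 1024))   # 1 GiB per partition, in bytes

    sgdisk "$disk" --zap-all       # destroy old GPT/MBR structures first

    (( size /= 512 ))              # convert bytes to 512-byte sectors
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))  # first one starts at LBA 2048
      (( part_end = part_start + size - 1 ))                    # inclusive end LBA
      # flock serializes concurrent sgdisk callers on the same disk
      flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done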
00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.171 06:34:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.108 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.368 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.368 06:34:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.628 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:34.628 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:34.628 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.628 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:34.628 06:34:22 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.628 06:34:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.567 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.827 06:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.761 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.019 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.019 00:05:37.019 real 0m6.117s 00:05:37.019 user 0m1.371s 00:05:37.019 sys 0m2.327s 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.019 06:34:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:37.019 ************************************ 00:05:37.019 END TEST nvme_mount 00:05:37.019 ************************************ 
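A note on the wipefs lines in the teardown above: "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" is wipefs clearing the ext4 superblock magic, 0xEF53 stored little-endian 0x38 bytes into the superblock that begins at byte 1024 (1024 + 56 = 0x438); once that signature is gone, blkid no longer reports a filesystem. The cleanup path itself reduces to a short sequence, sketched here with the mount point from the log and $SPDK_DIR standing in for the workspace path:

    # Teardown after each nvme_mount (sub)test, as in setup/devices.sh.
    nvme_mount=$SPDK_DIR/test/setup/nvme_mount     # $SPDK_DIR is illustrative
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    # wipefs erases only the magic bytes (ext4's 0xEF53 at 0x438, GPT's
    # "EFI PART" headers, the 0x55AA PMBR tag), then re-reads the table.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1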
00:05:37.019 06:34:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:37.019 06:34:24 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.019 06:34:24 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.019 06:34:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:37.019 ************************************ 00:05:37.019 START TEST dm_mount 00:05:37.019 ************************************ 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:37.019 06:34:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:38.396 Creating new GPT entries in memory. 00:05:38.396 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:38.396 other utilities. 00:05:38.396 06:34:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:38.396 06:34:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.396 06:34:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.396 06:34:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.396 06:34:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:39.336 Creating new GPT entries in memory. 00:05:39.336 The operation has completed successfully. 
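Between the sgdisk calls the suite starts scripts/sync_dev_uevents.sh in the background and later issues wait 501508 on its PID; judging by its name and placement, the helper watches kernel block uevents so the test only proceeds once the kernel has created /dev/nvme0n1p1 and /dev/nvme0n1p2. A hedged sketch of that wait-for-device idea (udevadm settle is a stand-in; the helper's internals are not shown in this log):

    # Block until the partition nodes from the sgdisk calls above exist.
    parts=(/dev/nvme0n1p1 /dev/nvme0n1p2)    # from the dm_mount trace
    udevadm settle                           # flush pending uevents (assumption)
    for p in "${parts[@]}"; do
      until [[ -b $p ]]; do sleep 0.1; done  # poll until the node appears
    done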
00:05:39.336 06:34:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:39.336 06:34:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.336 06:34:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.336 06:34:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.336 06:34:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:40.275 The operation has completed successfully. 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 501508 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.275 06:34:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.212 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:41.471 06:34:28 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.471 06:34:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:42.897 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:42.897 00:05:42.897 real 0m5.797s 00:05:42.897 user 0m0.961s 00:05:42.897 sys 0m1.703s 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.897 06:34:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:42.897 ************************************ 00:05:42.897 END TEST dm_mount 00:05:42.897 ************************************ 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.897 06:34:30 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.897 06:34:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.156 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:43.156 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:43.156 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.156 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.156 06:34:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:43.156 00:05:43.156 real 0m13.754s 00:05:43.156 user 0m2.966s 00:05:43.156 sys 0m4.999s 00:05:43.156 06:34:30 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.156 06:34:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:43.156 ************************************ 00:05:43.156 END TEST devices 00:05:43.156 ************************************ 00:05:43.156 00:05:43.156 real 0m42.734s 00:05:43.156 user 0m12.105s 00:05:43.156 sys 0m18.793s 00:05:43.156 06:34:30 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.156 06:34:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:43.156 ************************************ 00:05:43.156 END TEST setup.sh 00:05:43.156 ************************************ 00:05:43.156 06:34:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:44.537 Hugepages 00:05:44.537 node hugesize free / total 00:05:44.537 node0 1048576kB 0 / 0 00:05:44.537 node0 2048kB 2048 / 2048 00:05:44.537 node1 1048576kB 0 / 0 00:05:44.537 node1 2048kB 0 / 0 00:05:44.537 00:05:44.537 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.537 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:44.537 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:44.537 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:44.537 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:44.537 06:34:31 -- spdk/autotest.sh@130 -- # uname -s 00:05:44.538 06:34:31 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:44.538 06:34:31 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:44.538 06:34:31 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:45.912 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.912 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.912 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:46.848 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.848 06:34:34 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:47.786 06:34:35 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:47.786 06:34:35 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:47.786 06:34:35 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.786 06:34:35 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:47.786 06:34:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:47.786 06:34:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:47.786 06:34:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.786 06:34:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:47.786 06:34:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:47.786 06:34:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:47.786 06:34:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:47.786 06:34:35 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:49.160 Waiting for block devices as requested 00:05:49.160 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:49.160 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:49.160 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:49.418 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:49.418 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:49.418 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:49.418 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:49.677 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:49.677 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:49.677 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:49.677 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:49.935 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:49.935 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:49.935 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:49.935 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:50.193 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:50.193 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:50.193 06:34:37 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
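The driver transitions above (ioatdma -> vfio-pci and back) are setup.sh detaching each I/OAT DMA channel, and the NVMe drive at 0000:88:00.0, from its kernel driver so user-space DPDK/SPDK can map it, then restoring it afterwards. The usual sysfs sequence behind one such rebind, with the BDF copied from the log (setup.sh's exact implementation may differ):

    bdf=0000:00:04.0                          # one I/OAT channel from the log
    dev=/sys/bus/pci/devices/$bdf
    # Prefer vfio-pci for this device regardless of its ID tables...
    echo vfio-pci > "$dev/driver_override"
    # ...detach it from the current driver (ioatdma here), if bound...
    [[ -e $dev/driver ]] && echo "$bdf" > "$dev/driver/unbind"
    # ...and have the PCI core re-probe, which now selects vfio-pci.
    echo "$bdf" > /sys/bus/pci/drivers_probe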
00:05:50.193 06:34:37 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:50.193 06:34:37 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:50.193 06:34:37 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:50.193 06:34:37 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:50.193 06:34:37 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:50.193 06:34:37 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:50.193 06:34:37 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:50.193 06:34:37 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:50.193 06:34:37 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:50.193 06:34:37 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:50.193 06:34:37 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:50.193 06:34:37 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:50.193 06:34:37 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:50.193 06:34:37 -- common/autotest_common.sh@1553 -- # continue 00:05:50.451 06:34:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:50.451 06:34:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.451 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.451 06:34:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:50.451 06:34:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.451 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.451 06:34:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.385 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.385 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:51.385 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.644 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:52.579 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:52.579 06:34:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:52.579 06:34:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.579 06:34:40 -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.579 06:34:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:52.579 06:34:40 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:52.579 06:34:40 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:52.579 06:34:40 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:52.579 06:34:40 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:52.579 06:34:40 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:52.579 06:34:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:52.579 06:34:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:52.579 06:34:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.579 06:34:40 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:52.579 06:34:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:52.579 06:34:40 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:52.579 06:34:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:52.579 06:34:40 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:52.579 06:34:40 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:52.579 06:34:40 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:52.579 06:34:40 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:52.579 06:34:40 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:52.579 06:34:40 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:52.579 06:34:40 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:52.579 06:34:40 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=506807 00:05:52.579 06:34:40 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.579 06:34:40 -- common/autotest_common.sh@1594 -- # waitforlisten 506807 00:05:52.579 06:34:40 -- common/autotest_common.sh@827 -- # '[' -z 506807 ']' 00:05:52.579 06:34:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.579 06:34:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.579 06:34:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.579 06:34:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.579 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:52.837 [2024-07-15 06:34:40.224389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
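waitforlisten above blocks until the spdk_tgt just launched (PID 506807) is actually answering RPCs on /var/tmp/spdk.sock; only then does the test drive it with rpc.py. The readiness check amounts to polling a cheap RPC, sketched here with rpc_get_methods (a standard SPDK RPC) and the max_retries=100 visible in the trace:

    # Start the target and poll its RPC socket until it responds.
    "$SPDK_DIR/build/bin/spdk_tgt" &          # $SPDK_DIR is illustrative
    spdk_tgt_pid=$!
    rpc_sock=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do         # max_retries=100, as in the trace
      "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null && break
      sleep 0.5
    done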
00:05:52.837 [2024-07-15 06:34:40.224487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506807 ] 00:05:52.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.837 [2024-07-15 06:34:40.282706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.837 [2024-07-15 06:34:40.370159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.095 06:34:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.095 06:34:40 -- common/autotest_common.sh@860 -- # return 0 00:05:53.095 06:34:40 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:53.095 06:34:40 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:53.095 06:34:40 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:56.375 nvme0n1 00:05:56.375 06:34:43 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:56.375 [2024-07-15 06:34:43.940630] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:56.375 [2024-07-15 06:34:43.940678] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:56.375 request: 00:05:56.375 { 00:05:56.375 "nvme_ctrlr_name": "nvme0", 00:05:56.375 "password": "test", 00:05:56.375 "method": "bdev_nvme_opal_revert", 00:05:56.375 "req_id": 1 00:05:56.375 } 00:05:56.375 Got JSON-RPC error response 00:05:56.375 response: 00:05:56.375 { 00:05:56.375 "code": -32603, 00:05:56.375 "message": "Internal error" 00:05:56.375 } 00:05:56.375 06:34:43 -- common/autotest_common.sh@1600 -- # true 00:05:56.375 06:34:43 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:56.375 06:34:43 -- common/autotest_common.sh@1604 -- # killprocess 506807 00:05:56.375 06:34:43 -- common/autotest_common.sh@946 -- # '[' -z 506807 ']' 00:05:56.375 06:34:43 -- common/autotest_common.sh@950 -- # kill -0 506807 00:05:56.375 06:34:43 -- common/autotest_common.sh@951 -- # uname 00:05:56.375 06:34:43 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.375 06:34:43 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 506807 00:05:56.375 06:34:43 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.375 06:34:43 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.375 06:34:43 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 506807' 00:05:56.375 killing process with pid 506807 00:05:56.375 06:34:43 -- common/autotest_common.sh@965 -- # kill 506807 00:05:56.375 06:34:43 -- common/autotest_common.sh@970 -- # wait 506807 00:05:58.273 06:34:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:58.273 06:34:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:58.273 06:34:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:58.273 06:34:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:58.273 06:34:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:58.273 06:34:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.273 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.273 06:34:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:58.273 06:34:45 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.273 06:34:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.273 06:34:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.273 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.273 ************************************ 00:05:58.273 START TEST env 00:05:58.273 ************************************ 00:05:58.273 06:34:45 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.273 * Looking for test storage... 00:05:58.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:58.273 06:34:45 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.273 06:34:45 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.273 06:34:45 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.273 06:34:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.273 ************************************ 00:05:58.273 START TEST env_memory 00:05:58.273 ************************************ 00:05:58.273 06:34:45 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.273 00:05:58.273 00:05:58.273 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.273 http://cunit.sourceforge.net/ 00:05:58.273 00:05:58.273 00:05:58.273 Suite: memory 00:05:58.567 Test: alloc and free memory map ...[2024-07-15 06:34:45.911122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.567 passed 00:05:58.567 Test: mem map translation ...[2024-07-15 06:34:45.930711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.567 [2024-07-15 06:34:45.930734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.567 [2024-07-15 06:34:45.930775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.567 [2024-07-15 06:34:45.930787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.567 passed 00:05:58.567 Test: mem map registration ...[2024-07-15 06:34:45.971112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:58.567 [2024-07-15 06:34:45.971133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:58.567 passed 00:05:58.567 Test: mem map adjacent registrations ...passed 00:05:58.567 00:05:58.567 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.567 suites 1 1 n/a 0 0 00:05:58.567 tests 4 4 4 0 0 00:05:58.567 asserts 152 152 152 0 n/a 00:05:58.567 00:05:58.567 Elapsed time = 0.139 seconds 00:05:58.567 00:05:58.567 real 0m0.148s 00:05:58.567 user 0m0.141s 00:05:58.567 sys 0m0.007s 00:05:58.567 06:34:46 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.567 06:34:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:58.567 ************************************ 00:05:58.567 END TEST env_memory 00:05:58.567 ************************************ 00:05:58.567 06:34:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.567 06:34:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.567 06:34:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.567 06:34:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.567 ************************************ 00:05:58.567 START TEST env_vtophys 00:05:58.567 ************************************ 00:05:58.567 06:34:46 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.567 EAL: lib.eal log level changed from notice to debug 00:05:58.567 EAL: Detected lcore 0 as core 0 on socket 0 00:05:58.567 EAL: Detected lcore 1 as core 1 on socket 0 00:05:58.567 EAL: Detected lcore 2 as core 2 on socket 0 00:05:58.567 EAL: Detected lcore 3 as core 3 on socket 0 00:05:58.567 EAL: Detected lcore 4 as core 4 on socket 0 00:05:58.567 EAL: Detected lcore 5 as core 5 on socket 0 00:05:58.567 EAL: Detected lcore 6 as core 8 on socket 0 00:05:58.567 EAL: Detected lcore 7 as core 9 on socket 0 00:05:58.567 EAL: Detected lcore 8 as core 10 on socket 0 00:05:58.567 EAL: Detected lcore 9 as core 11 on socket 0 00:05:58.567 EAL: Detected lcore 10 as core 12 on socket 0 00:05:58.567 EAL: Detected lcore 11 as core 13 on socket 0 00:05:58.567 EAL: Detected lcore 12 as core 0 on socket 1 00:05:58.567 EAL: Detected lcore 13 as core 1 on socket 1 00:05:58.567 EAL: Detected lcore 14 as core 2 on socket 1 00:05:58.567 EAL: Detected lcore 15 as core 3 on socket 1 00:05:58.567 EAL: Detected lcore 16 as core 4 on socket 1 00:05:58.567 EAL: Detected lcore 17 as core 5 on socket 1 00:05:58.567 EAL: Detected lcore 18 as core 8 on socket 1 00:05:58.567 EAL: Detected lcore 19 as core 9 on socket 1 00:05:58.567 EAL: Detected lcore 20 as core 10 on socket 1 00:05:58.567 EAL: Detected lcore 21 as core 11 on socket 1 00:05:58.567 EAL: Detected lcore 22 as core 12 on socket 1 00:05:58.567 EAL: Detected lcore 23 as core 13 on socket 1 00:05:58.567 EAL: Detected lcore 24 as core 0 on socket 0 00:05:58.567 EAL: Detected lcore 25 as core 1 on socket 0 00:05:58.567 EAL: Detected lcore 26 as core 2 on socket 0 00:05:58.567 EAL: Detected lcore 27 as core 3 on socket 0 00:05:58.567 EAL: Detected lcore 28 as core 4 on socket 0 00:05:58.567 EAL: Detected lcore 29 as core 5 on socket 0 00:05:58.567 EAL: Detected lcore 30 as core 8 on socket 0 00:05:58.567 EAL: Detected lcore 31 as core 9 on socket 0 00:05:58.567 EAL: Detected lcore 32 as core 10 on socket 0 00:05:58.567 EAL: Detected lcore 33 as core 11 on socket 0 00:05:58.567 EAL: Detected lcore 34 as core 12 on socket 0 00:05:58.567 EAL: Detected lcore 35 as core 13 on socket 0 00:05:58.567 EAL: Detected lcore 36 as core 0 on socket 1 00:05:58.567 EAL: Detected lcore 37 as core 1 on socket 1 00:05:58.567 EAL: Detected lcore 38 as core 2 on socket 1 00:05:58.567 EAL: Detected lcore 39 as core 3 on socket 1 00:05:58.567 EAL: Detected lcore 40 as core 4 on socket 1 00:05:58.567 EAL: Detected lcore 41 as core 5 on socket 1 00:05:58.567 EAL: Detected lcore 42 as core 8 on socket 1 00:05:58.567 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:58.567 EAL: Detected lcore 44 as core 10 on socket 1 00:05:58.567 EAL: Detected lcore 45 as core 11 on socket 1 00:05:58.567 EAL: Detected lcore 46 as core 12 on socket 1 00:05:58.567 EAL: Detected lcore 47 as core 13 on socket 1 00:05:58.567 EAL: Maximum logical cores by configuration: 128 00:05:58.567 EAL: Detected CPU lcores: 48 00:05:58.567 EAL: Detected NUMA nodes: 2 00:05:58.567 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:58.567 EAL: Detected shared linkage of DPDK 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:58.567 EAL: Registered [vdev] bus. 00:05:58.567 EAL: bus.vdev log level changed from disabled to notice 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:58.567 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:58.567 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:58.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:58.567 EAL: No shared files mode enabled, IPC will be disabled 00:05:58.567 EAL: No shared files mode enabled, IPC is disabled 00:05:58.567 EAL: Bus pci wants IOVA as 'DC' 00:05:58.567 EAL: Bus vdev wants IOVA as 'DC' 00:05:58.567 EAL: Buses did not request a specific IOVA mode. 00:05:58.567 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:58.567 EAL: Selected IOVA mode 'VA' 00:05:58.567 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.567 EAL: Probing VFIO support... 00:05:58.567 EAL: IOMMU type 1 (Type 1) is supported 00:05:58.567 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:58.567 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:58.567 EAL: VFIO support initialized 00:05:58.567 EAL: Ask a virtual area of 0x2e000 bytes 00:05:58.567 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:58.567 EAL: Setting up physically contiguous memory... 
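The EAL probe sequence above (48 lcores on 2 NUMA nodes, VFIO type 1, IOVA selected as 'VA') can be cross-checked from a shell before a run. The commands below are an illustrative aside, not part of the test output, and the exact sysfs paths can vary by kernel:

ls /sys/kernel/iommu_groups | wc -l   # non-zero when an IOMMU is active, which is what allows IOVA mode 'VA'
lsmod | grep vfio_pci                 # the driver the NVMe and ioatdma devices were rebound to earlier
grep -i hugepages /proc/meminfo       # the 2048 kB pools that back the memseg lists set up below
lscpu | grep -iE 'numa|socket'        # should agree with 'Detected CPU lcores: 48' across 2 sockets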
00:05:58.567 EAL: Setting maximum number of open files to 524288 00:05:58.567 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:58.568 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:58.568 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:58.568 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:58.568 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.568 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:58.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.568 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.568 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:58.568 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:58.568 EAL: Hugepages will be freed exactly as allocated. 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: TSC frequency is ~2700000 KHz 00:05:58.568 EAL: Main lcore 0 is ready (tid=7f1997aa5a00;cpuset=[0]) 00:05:58.568 EAL: Trying to obtain current memory policy. 00:05:58.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.568 EAL: Restoring previous memory policy: 0 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was expanded by 2MB 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:58.568 EAL: Mem event callback 'spdk:(nil)' registered 00:05:58.568 00:05:58.568 00:05:58.568 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.568 http://cunit.sourceforge.net/ 00:05:58.568 00:05:58.568 00:05:58.568 Suite: components_suite 00:05:58.568 Test: vtophys_malloc_test ...passed 00:05:58.568 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:58.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.568 EAL: Restoring previous memory policy: 4 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was expanded by 4MB 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was shrunk by 4MB 00:05:58.568 EAL: Trying to obtain current memory policy. 00:05:58.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.568 EAL: Restoring previous memory policy: 4 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was expanded by 6MB 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was shrunk by 6MB 00:05:58.568 EAL: Trying to obtain current memory policy. 00:05:58.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.568 EAL: Restoring previous memory policy: 4 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was expanded by 10MB 00:05:58.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.568 EAL: request: mp_malloc_sync 00:05:58.568 EAL: No shared files mode enabled, IPC is disabled 00:05:58.568 EAL: Heap on socket 0 was shrunk by 10MB 00:05:58.568 EAL: Trying to obtain current memory policy. 
00:05:58.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.827 EAL: Restoring previous memory policy: 4 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.827 EAL: Trying to obtain current memory policy. 00:05:58.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.827 EAL: Restoring previous memory policy: 4 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.827 EAL: Trying to obtain current memory policy. 00:05:58.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.827 EAL: Restoring previous memory policy: 4 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.827 EAL: Trying to obtain current memory policy. 00:05:58.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.827 EAL: Restoring previous memory policy: 4 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.827 EAL: Trying to obtain current memory policy. 00:05:58.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.827 EAL: Restoring previous memory policy: 4 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.827 EAL: request: mp_malloc_sync 00:05:58.827 EAL: No shared files mode enabled, IPC is disabled 00:05:58.827 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.084 EAL: request: mp_malloc_sync 00:05:59.084 EAL: No shared files mode enabled, IPC is disabled 00:05:59.084 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.084 EAL: Trying to obtain current memory policy. 
00:05:59.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.084 EAL: Restoring previous memory policy: 4 00:05:59.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.084 EAL: request: mp_malloc_sync 00:05:59.084 EAL: No shared files mode enabled, IPC is disabled 00:05:59.084 EAL: Heap on socket 0 was expanded by 514MB 00:05:59.343 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.343 EAL: request: mp_malloc_sync 00:05:59.343 EAL: No shared files mode enabled, IPC is disabled 00:05:59.343 EAL: Heap on socket 0 was shrunk by 514MB 00:05:59.343 EAL: Trying to obtain current memory policy. 00:05:59.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.600 EAL: Restoring previous memory policy: 4 00:05:59.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.600 EAL: request: mp_malloc_sync 00:05:59.600 EAL: No shared files mode enabled, IPC is disabled 00:05:59.600 EAL: Heap on socket 0 was expanded by 1026MB 00:05:59.858 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.117 EAL: request: mp_malloc_sync 00:06:00.117 EAL: No shared files mode enabled, IPC is disabled 00:06:00.117 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.117 passed 00:06:00.117 00:06:00.117 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.117 suites 1 1 n/a 0 0 00:06:00.117 tests 2 2 2 0 0 00:06:00.117 asserts 497 497 497 0 n/a 00:06:00.117 00:06:00.117 Elapsed time = 1.387 seconds 00:06:00.117 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.117 EAL: request: mp_malloc_sync 00:06:00.117 EAL: No shared files mode enabled, IPC is disabled 00:06:00.117 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.117 EAL: No shared files mode enabled, IPC is disabled 00:06:00.117 EAL: No shared files mode enabled, IPC is disabled 00:06:00.117 EAL: No shared files mode enabled, IPC is disabled 00:06:00.117 00:06:00.117 real 0m1.501s 00:06:00.117 user 0m0.867s 00:06:00.117 sys 0m0.605s 00:06:00.117 06:34:47 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.117 06:34:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 ************************************ 00:06:00.117 END TEST env_vtophys 00:06:00.117 ************************************ 00:06:00.117 06:34:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.117 06:34:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.117 06:34:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.117 06:34:47 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 ************************************ 00:06:00.117 START TEST env_pci 00:06:00.117 ************************************ 00:06:00.117 06:34:47 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.117 00:06:00.117 00:06:00.117 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.117 http://cunit.sourceforge.net/ 00:06:00.117 00:06:00.117 00:06:00.117 Suite: pci 00:06:00.117 Test: pci_hook ...[2024-07-15 06:34:47.635562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 507692 has claimed it 00:06:00.117 EAL: Cannot find device (10000:00:01.0) 00:06:00.117 EAL: Failed to attach device on primary process 00:06:00.117 passed 00:06:00.117 00:06:00.117 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:00.117 suites 1 1 n/a 0 0 00:06:00.117 tests 1 1 1 0 0 00:06:00.117 asserts 25 25 25 0 n/a 00:06:00.117 00:06:00.117 Elapsed time = 0.021 seconds 00:06:00.117 00:06:00.117 real 0m0.035s 00:06:00.117 user 0m0.011s 00:06:00.117 sys 0m0.024s 00:06:00.117 06:34:47 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.117 06:34:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 ************************************ 00:06:00.117 END TEST env_pci 00:06:00.117 ************************************ 00:06:00.117 06:34:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.117 06:34:47 env -- env/env.sh@15 -- # uname 00:06:00.117 06:34:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.117 06:34:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.117 06:34:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.117 06:34:47 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:00.117 06:34:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.117 06:34:47 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 ************************************ 00:06:00.117 START TEST env_dpdk_post_init 00:06:00.117 ************************************ 00:06:00.117 06:34:47 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.376 EAL: Detected CPU lcores: 48 00:06:00.376 EAL: Detected NUMA nodes: 2 00:06:00.376 EAL: Detected shared linkage of DPDK 00:06:00.376 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.376 EAL: Selected IOVA mode 'VA' 00:06:00.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.376 EAL: VFIO support initialized 00:06:00.376 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.376 EAL: Using IOMMU type 1 (Type 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:00.376 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:00.635 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:00.635 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:01.203 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:04.485 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:04.485 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:04.485 Starting DPDK initialization... 00:06:04.485 Starting SPDK post initialization... 00:06:04.485 SPDK NVMe probe 00:06:04.485 Attaching to 0000:88:00.0 00:06:04.485 Attached to 0000:88:00.0 00:06:04.485 Cleaning up... 00:06:04.485 00:06:04.485 real 0m4.382s 00:06:04.485 user 0m3.245s 00:06:04.485 sys 0m0.192s 00:06:04.485 06:34:52 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.485 06:34:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.485 ************************************ 00:06:04.485 END TEST env_dpdk_post_init 00:06:04.485 ************************************ 00:06:04.742 06:34:52 env -- env/env.sh@26 -- # uname 00:06:04.742 06:34:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:04.742 06:34:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.742 06:34:52 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.742 06:34:52 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.742 06:34:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.742 ************************************ 00:06:04.742 START TEST env_mem_callbacks 00:06:04.742 ************************************ 00:06:04.742 06:34:52 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.742 EAL: Detected CPU lcores: 48 00:06:04.742 EAL: Detected NUMA nodes: 2 00:06:04.742 EAL: Detected shared linkage of DPDK 00:06:04.742 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.742 EAL: Selected IOVA mode 'VA' 00:06:04.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.742 EAL: VFIO support initialized 00:06:04.742 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.742 00:06:04.742 00:06:04.742 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.742 http://cunit.sourceforge.net/ 00:06:04.742 00:06:04.742 00:06:04.742 Suite: memory 00:06:04.742 Test: test ... 
00:06:04.742 register 0x200000200000 2097152 00:06:04.743 malloc 3145728 00:06:04.743 register 0x200000400000 4194304 00:06:04.743 buf 0x200000500000 len 3145728 PASSED 00:06:04.743 malloc 64 00:06:04.743 buf 0x2000004fff40 len 64 PASSED 00:06:04.743 malloc 4194304 00:06:04.743 register 0x200000800000 6291456 00:06:04.743 buf 0x200000a00000 len 4194304 PASSED 00:06:04.743 free 0x200000500000 3145728 00:06:04.743 free 0x2000004fff40 64 00:06:04.743 unregister 0x200000400000 4194304 PASSED 00:06:04.743 free 0x200000a00000 4194304 00:06:04.743 unregister 0x200000800000 6291456 PASSED 00:06:04.743 malloc 8388608 00:06:04.743 register 0x200000400000 10485760 00:06:04.743 buf 0x200000600000 len 8388608 PASSED 00:06:04.743 free 0x200000600000 8388608 00:06:04.743 unregister 0x200000400000 10485760 PASSED 00:06:04.743 passed 00:06:04.743 00:06:04.743 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.743 suites 1 1 n/a 0 0 00:06:04.743 tests 1 1 1 0 0 00:06:04.743 asserts 15 15 15 0 n/a 00:06:04.743 00:06:04.743 Elapsed time = 0.005 seconds 00:06:04.743 00:06:04.743 real 0m0.050s 00:06:04.743 user 0m0.018s 00:06:04.743 sys 0m0.032s 00:06:04.743 06:34:52 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.743 06:34:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:04.743 ************************************ 00:06:04.743 END TEST env_mem_callbacks 00:06:04.743 ************************************ 00:06:04.743 00:06:04.743 real 0m6.412s 00:06:04.743 user 0m4.403s 00:06:04.743 sys 0m1.053s 00:06:04.743 06:34:52 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.743 06:34:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.743 ************************************ 00:06:04.743 END TEST env 00:06:04.743 ************************************ 00:06:04.743 06:34:52 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.743 06:34:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.743 06:34:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.743 06:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:04.743 ************************************ 00:06:04.743 START TEST rpc 00:06:04.743 ************************************ 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.743 * Looking for test storage... 00:06:04.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:04.743 06:34:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=508383 00:06:04.743 06:34:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:04.743 06:34:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.743 06:34:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 508383 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@827 -- # '[' -z 508383 ']' 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
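The rpc suite that begins here drives the freshly started spdk_tgt (pid 508383, launched with -e bdev) through scripts/rpc.py on /var/tmp/spdk.sock; rpc_integrity, traced next, creates a malloc bdev, layers a passthru bdev on it, and checks bdev_get_bdevs between each step. A hedged standalone equivalent of those JSON-RPC round trips, with SPDK_ROOT assumed as above:

RPC="$SPDK_ROOT/scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default
$RPC bdev_malloc_create 8 512     # 8 MiB malloc bdev with 512 B blocks -> prints its name, e.g. Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0
$RPC bdev_get_bdevs | jq length   # the test asserts 2 bdevs at this point
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_get_bdevs | jq length   # back to 0, as in the '[' 0 == 0 ']' check traced below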
00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.743 06:34:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.001 [2024-07-15 06:34:52.363148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:05.001 [2024-07-15 06:34:52.363246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508383 ] 00:06:05.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.001 [2024-07-15 06:34:52.439349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.001 [2024-07-15 06:34:52.537496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.001 [2024-07-15 06:34:52.537573] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 508383' to capture a snapshot of events at runtime. 00:06:05.001 [2024-07-15 06:34:52.537613] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.001 [2024-07-15 06:34:52.537635] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.001 [2024-07-15 06:34:52.537653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid508383 for offline analysis/debug. 00:06:05.001 [2024-07-15 06:34:52.537695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.259 06:34:52 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.259 06:34:52 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:05.259 06:34:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.259 06:34:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.259 06:34:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.259 06:34:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.259 06:34:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.259 06:34:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.259 06:34:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 ************************************ 00:06:05.259 START TEST rpc_integrity 00:06:05.259 ************************************ 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:05.259 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.259 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.259 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.259 06:34:52 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.259 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.259 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.518 { 00:06:05.518 "name": "Malloc0", 00:06:05.518 "aliases": [ 00:06:05.518 "0073fa72-c963-46f1-b859-a09e3096e427" 00:06:05.518 ], 00:06:05.518 "product_name": "Malloc disk", 00:06:05.518 "block_size": 512, 00:06:05.518 "num_blocks": 16384, 00:06:05.518 "uuid": "0073fa72-c963-46f1-b859-a09e3096e427", 00:06:05.518 "assigned_rate_limits": { 00:06:05.518 "rw_ios_per_sec": 0, 00:06:05.518 "rw_mbytes_per_sec": 0, 00:06:05.518 "r_mbytes_per_sec": 0, 00:06:05.518 "w_mbytes_per_sec": 0 00:06:05.518 }, 00:06:05.518 "claimed": false, 00:06:05.518 "zoned": false, 00:06:05.518 "supported_io_types": { 00:06:05.518 "read": true, 00:06:05.518 "write": true, 00:06:05.518 "unmap": true, 00:06:05.518 "write_zeroes": true, 00:06:05.518 "flush": true, 00:06:05.518 "reset": true, 00:06:05.518 "compare": false, 00:06:05.518 "compare_and_write": false, 00:06:05.518 "abort": true, 00:06:05.518 "nvme_admin": false, 00:06:05.518 "nvme_io": false 00:06:05.518 }, 00:06:05.518 "memory_domains": [ 00:06:05.518 { 00:06:05.518 "dma_device_id": "system", 00:06:05.518 "dma_device_type": 1 00:06:05.518 }, 00:06:05.518 { 00:06:05.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.518 "dma_device_type": 2 00:06:05.518 } 00:06:05.518 ], 00:06:05.518 "driver_specific": {} 00:06:05.518 } 00:06:05.518 ]' 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 [2024-07-15 06:34:52.920386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.518 [2024-07-15 06:34:52.920440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.518 [2024-07-15 06:34:52.920460] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12e5d60 00:06:05.518 [2024-07-15 06:34:52.920472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.518 [2024-07-15 06:34:52.921740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.518 [2024-07-15 06:34:52.921761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.518 Passthru0 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.518 { 00:06:05.518 "name": "Malloc0", 00:06:05.518 "aliases": [ 00:06:05.518 "0073fa72-c963-46f1-b859-a09e3096e427" 00:06:05.518 ], 00:06:05.518 "product_name": "Malloc disk", 00:06:05.518 "block_size": 512, 00:06:05.518 "num_blocks": 16384, 00:06:05.518 "uuid": "0073fa72-c963-46f1-b859-a09e3096e427", 00:06:05.518 "assigned_rate_limits": { 00:06:05.518 "rw_ios_per_sec": 0, 00:06:05.518 "rw_mbytes_per_sec": 0, 00:06:05.518 "r_mbytes_per_sec": 0, 00:06:05.518 "w_mbytes_per_sec": 0 00:06:05.518 }, 00:06:05.518 "claimed": true, 00:06:05.518 "claim_type": "exclusive_write", 00:06:05.518 "zoned": false, 00:06:05.518 "supported_io_types": { 00:06:05.518 "read": true, 00:06:05.518 "write": true, 00:06:05.518 "unmap": true, 00:06:05.518 "write_zeroes": true, 00:06:05.518 "flush": true, 00:06:05.518 "reset": true, 00:06:05.518 "compare": false, 00:06:05.518 "compare_and_write": false, 00:06:05.518 "abort": true, 00:06:05.518 "nvme_admin": false, 00:06:05.518 "nvme_io": false 00:06:05.518 }, 00:06:05.518 "memory_domains": [ 00:06:05.518 { 00:06:05.518 "dma_device_id": "system", 00:06:05.518 "dma_device_type": 1 00:06:05.518 }, 00:06:05.518 { 00:06:05.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.518 "dma_device_type": 2 00:06:05.518 } 00:06:05.518 ], 00:06:05.518 "driver_specific": {} 00:06:05.518 }, 00:06:05.518 { 00:06:05.518 "name": "Passthru0", 00:06:05.518 "aliases": [ 00:06:05.518 "49d9a168-d336-5074-831a-9d79de564854" 00:06:05.518 ], 00:06:05.518 "product_name": "passthru", 00:06:05.518 "block_size": 512, 00:06:05.518 "num_blocks": 16384, 00:06:05.518 "uuid": "49d9a168-d336-5074-831a-9d79de564854", 00:06:05.518 "assigned_rate_limits": { 00:06:05.518 "rw_ios_per_sec": 0, 00:06:05.518 "rw_mbytes_per_sec": 0, 00:06:05.518 "r_mbytes_per_sec": 0, 00:06:05.518 "w_mbytes_per_sec": 0 00:06:05.518 }, 00:06:05.518 "claimed": false, 00:06:05.518 "zoned": false, 00:06:05.518 "supported_io_types": { 00:06:05.518 "read": true, 00:06:05.518 "write": true, 00:06:05.518 "unmap": true, 00:06:05.518 "write_zeroes": true, 00:06:05.518 "flush": true, 00:06:05.518 "reset": true, 00:06:05.518 "compare": false, 00:06:05.518 "compare_and_write": false, 00:06:05.518 "abort": true, 00:06:05.518 "nvme_admin": false, 00:06:05.518 "nvme_io": false 00:06:05.518 }, 00:06:05.518 "memory_domains": [ 00:06:05.518 { 00:06:05.518 "dma_device_id": "system", 00:06:05.518 "dma_device_type": 1 00:06:05.518 }, 00:06:05.518 { 00:06:05.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.518 "dma_device_type": 2 00:06:05.518 } 00:06:05.518 ], 00:06:05.518 "driver_specific": { 00:06:05.518 "passthru": { 00:06:05.518 "name": "Passthru0", 00:06:05.518 "base_bdev_name": "Malloc0" 00:06:05.518 } 00:06:05.518 } 00:06:05.518 } 00:06:05.518 ]' 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 
06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 06:34:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.518 06:34:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.518 06:34:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.518 00:06:05.518 real 0m0.212s 00:06:05.518 user 0m0.137s 00:06:05.518 sys 0m0.017s 00:06:05.518 06:34:53 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.518 06:34:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 ************************************ 00:06:05.518 END TEST rpc_integrity 00:06:05.518 ************************************ 00:06:05.518 06:34:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.518 06:34:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.518 06:34:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.518 06:34:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 ************************************ 00:06:05.518 START TEST rpc_plugins 00:06:05.518 ************************************ 00:06:05.518 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:05.518 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.518 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.518 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.519 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.519 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.519 { 00:06:05.519 "name": "Malloc1", 00:06:05.519 "aliases": [ 00:06:05.519 "404f74a1-10bb-4a1a-a3c1-948bcfee9ff2" 00:06:05.519 ], 00:06:05.519 "product_name": "Malloc disk", 00:06:05.519 "block_size": 4096, 00:06:05.519 "num_blocks": 256, 00:06:05.519 "uuid": "404f74a1-10bb-4a1a-a3c1-948bcfee9ff2", 00:06:05.519 "assigned_rate_limits": { 00:06:05.519 "rw_ios_per_sec": 0, 00:06:05.519 "rw_mbytes_per_sec": 0, 00:06:05.519 "r_mbytes_per_sec": 0, 00:06:05.519 "w_mbytes_per_sec": 0 00:06:05.519 }, 00:06:05.519 "claimed": false, 00:06:05.519 "zoned": false, 00:06:05.519 "supported_io_types": { 00:06:05.519 "read": true, 00:06:05.519 "write": true, 00:06:05.519 "unmap": true, 00:06:05.519 "write_zeroes": true, 00:06:05.519 
"flush": true, 00:06:05.519 "reset": true, 00:06:05.519 "compare": false, 00:06:05.519 "compare_and_write": false, 00:06:05.519 "abort": true, 00:06:05.519 "nvme_admin": false, 00:06:05.519 "nvme_io": false 00:06:05.519 }, 00:06:05.519 "memory_domains": [ 00:06:05.519 { 00:06:05.519 "dma_device_id": "system", 00:06:05.519 "dma_device_type": 1 00:06:05.519 }, 00:06:05.519 { 00:06:05.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.519 "dma_device_type": 2 00:06:05.519 } 00:06:05.519 ], 00:06:05.519 "driver_specific": {} 00:06:05.519 } 00:06:05.519 ]' 00:06:05.519 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:05.781 06:34:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:05.781 00:06:05.781 real 0m0.106s 00:06:05.781 user 0m0.067s 00:06:05.781 sys 0m0.010s 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.781 06:34:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.781 ************************************ 00:06:05.781 END TEST rpc_plugins 00:06:05.781 ************************************ 00:06:05.781 06:34:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:05.781 06:34:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.781 06:34:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.781 06:34:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.781 ************************************ 00:06:05.781 START TEST rpc_trace_cmd_test 00:06:05.781 ************************************ 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.781 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:05.782 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid508383", 00:06:05.782 "tpoint_group_mask": "0x8", 00:06:05.782 "iscsi_conn": { 00:06:05.782 "mask": "0x2", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "scsi": { 00:06:05.782 "mask": "0x4", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "bdev": { 00:06:05.782 "mask": "0x8", 00:06:05.782 "tpoint_mask": 
"0xffffffffffffffff" 00:06:05.782 }, 00:06:05.782 "nvmf_rdma": { 00:06:05.782 "mask": "0x10", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "nvmf_tcp": { 00:06:05.782 "mask": "0x20", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "ftl": { 00:06:05.782 "mask": "0x40", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "blobfs": { 00:06:05.782 "mask": "0x80", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "dsa": { 00:06:05.782 "mask": "0x200", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "thread": { 00:06:05.782 "mask": "0x400", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "nvme_pcie": { 00:06:05.782 "mask": "0x800", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "iaa": { 00:06:05.782 "mask": "0x1000", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "nvme_tcp": { 00:06:05.782 "mask": "0x2000", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "bdev_nvme": { 00:06:05.782 "mask": "0x4000", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 }, 00:06:05.782 "sock": { 00:06:05.782 "mask": "0x8000", 00:06:05.782 "tpoint_mask": "0x0" 00:06:05.782 } 00:06:05.782 }' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.782 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.045 06:34:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:06.045 00:06:06.045 real 0m0.181s 00:06:06.045 user 0m0.158s 00:06:06.045 sys 0m0.014s 00:06:06.045 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 ************************************ 00:06:06.045 END TEST rpc_trace_cmd_test 00:06:06.045 ************************************ 00:06:06.045 06:34:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.045 06:34:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.045 06:34:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.045 06:34:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.045 06:34:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.045 06:34:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 ************************************ 00:06:06.045 START TEST rpc_daemon_integrity 00:06:06.045 ************************************ 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.045 { 00:06:06.045 "name": "Malloc2", 00:06:06.045 "aliases": [ 00:06:06.045 "5ba7635f-66a5-4e47-93aa-3b7714fcd9e1" 00:06:06.045 ], 00:06:06.045 "product_name": "Malloc disk", 00:06:06.045 "block_size": 512, 00:06:06.045 "num_blocks": 16384, 00:06:06.045 "uuid": "5ba7635f-66a5-4e47-93aa-3b7714fcd9e1", 00:06:06.045 "assigned_rate_limits": { 00:06:06.045 "rw_ios_per_sec": 0, 00:06:06.045 "rw_mbytes_per_sec": 0, 00:06:06.045 "r_mbytes_per_sec": 0, 00:06:06.045 "w_mbytes_per_sec": 0 00:06:06.045 }, 00:06:06.045 "claimed": false, 00:06:06.045 "zoned": false, 00:06:06.045 "supported_io_types": { 00:06:06.045 "read": true, 00:06:06.045 "write": true, 00:06:06.045 "unmap": true, 00:06:06.045 "write_zeroes": true, 00:06:06.045 "flush": true, 00:06:06.045 "reset": true, 00:06:06.045 "compare": false, 00:06:06.045 "compare_and_write": false, 00:06:06.045 "abort": true, 00:06:06.045 "nvme_admin": false, 00:06:06.045 "nvme_io": false 00:06:06.045 }, 00:06:06.045 "memory_domains": [ 00:06:06.045 { 00:06:06.045 "dma_device_id": "system", 00:06:06.045 "dma_device_type": 1 00:06:06.045 }, 00:06:06.045 { 00:06:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.045 "dma_device_type": 2 00:06:06.045 } 00:06:06.045 ], 00:06:06.045 "driver_specific": {} 00:06:06.045 } 00:06:06.045 ]' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 [2024-07-15 06:34:53.546205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.045 [2024-07-15 06:34:53.546258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.045 [2024-07-15 06:34:53.546276] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1497420 00:06:06.045 [2024-07-15 06:34:53.546289] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.045 [2024-07-15 06:34:53.547431] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.045 [2024-07-15 06:34:53.547453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.045 Passthru0 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.045 { 00:06:06.045 "name": "Malloc2", 00:06:06.045 "aliases": [ 00:06:06.045 "5ba7635f-66a5-4e47-93aa-3b7714fcd9e1" 00:06:06.045 ], 00:06:06.045 "product_name": "Malloc disk", 00:06:06.045 "block_size": 512, 00:06:06.045 "num_blocks": 16384, 00:06:06.045 "uuid": "5ba7635f-66a5-4e47-93aa-3b7714fcd9e1", 00:06:06.045 "assigned_rate_limits": { 00:06:06.045 "rw_ios_per_sec": 0, 00:06:06.045 "rw_mbytes_per_sec": 0, 00:06:06.045 "r_mbytes_per_sec": 0, 00:06:06.045 "w_mbytes_per_sec": 0 00:06:06.045 }, 00:06:06.045 "claimed": true, 00:06:06.045 "claim_type": "exclusive_write", 00:06:06.045 "zoned": false, 00:06:06.045 "supported_io_types": { 00:06:06.045 "read": true, 00:06:06.045 "write": true, 00:06:06.045 "unmap": true, 00:06:06.045 "write_zeroes": true, 00:06:06.045 "flush": true, 00:06:06.045 "reset": true, 00:06:06.045 "compare": false, 00:06:06.045 "compare_and_write": false, 00:06:06.045 "abort": true, 00:06:06.045 "nvme_admin": false, 00:06:06.045 "nvme_io": false 00:06:06.045 }, 00:06:06.045 "memory_domains": [ 00:06:06.045 { 00:06:06.045 "dma_device_id": "system", 00:06:06.045 "dma_device_type": 1 00:06:06.045 }, 00:06:06.045 { 00:06:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.045 "dma_device_type": 2 00:06:06.045 } 00:06:06.045 ], 00:06:06.045 "driver_specific": {} 00:06:06.045 }, 00:06:06.045 { 00:06:06.045 "name": "Passthru0", 00:06:06.045 "aliases": [ 00:06:06.045 "a932d5ca-2a92-51af-9cd5-c77603da0a98" 00:06:06.045 ], 00:06:06.045 "product_name": "passthru", 00:06:06.045 "block_size": 512, 00:06:06.045 "num_blocks": 16384, 00:06:06.045 "uuid": "a932d5ca-2a92-51af-9cd5-c77603da0a98", 00:06:06.045 "assigned_rate_limits": { 00:06:06.045 "rw_ios_per_sec": 0, 00:06:06.045 "rw_mbytes_per_sec": 0, 00:06:06.045 "r_mbytes_per_sec": 0, 00:06:06.045 "w_mbytes_per_sec": 0 00:06:06.045 }, 00:06:06.045 "claimed": false, 00:06:06.045 "zoned": false, 00:06:06.045 "supported_io_types": { 00:06:06.045 "read": true, 00:06:06.045 "write": true, 00:06:06.045 "unmap": true, 00:06:06.045 "write_zeroes": true, 00:06:06.045 "flush": true, 00:06:06.045 "reset": true, 00:06:06.045 "compare": false, 00:06:06.045 "compare_and_write": false, 00:06:06.045 "abort": true, 00:06:06.045 "nvme_admin": false, 00:06:06.045 "nvme_io": false 00:06:06.045 }, 00:06:06.045 "memory_domains": [ 00:06:06.045 { 00:06:06.045 "dma_device_id": "system", 00:06:06.045 "dma_device_type": 1 00:06:06.045 }, 00:06:06.045 { 00:06:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.045 "dma_device_type": 2 00:06:06.045 } 00:06:06.045 ], 00:06:06.045 "driver_specific": { 00:06:06.045 "passthru": { 00:06:06.045 "name": "Passthru0", 00:06:06.045 "base_bdev_name": "Malloc2" 00:06:06.045 } 00:06:06.045 } 00:06:06.045 } 00:06:06.045 ]' 00:06:06.045 06:34:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.045 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.303 06:34:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.303 00:06:06.303 real 0m0.209s 00:06:06.303 user 0m0.136s 00:06:06.303 sys 0m0.021s 00:06:06.303 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.303 06:34:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.303 ************************************ 00:06:06.303 END TEST rpc_daemon_integrity 00:06:06.303 ************************************ 00:06:06.303 06:34:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.303 06:34:53 rpc -- rpc/rpc.sh@84 -- # killprocess 508383 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@946 -- # '[' -z 508383 ']' 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@950 -- # kill -0 508383 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@951 -- # uname 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 508383 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 508383' 00:06:06.303 killing process with pid 508383 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@965 -- # kill 508383 00:06:06.303 06:34:53 rpc -- common/autotest_common.sh@970 -- # wait 508383 00:06:06.561 00:06:06.561 real 0m1.864s 00:06:06.561 user 0m2.391s 00:06:06.561 sys 0m0.611s 00:06:06.561 06:34:54 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.561 06:34:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 ************************************ 00:06:06.561 END TEST rpc 00:06:06.561 ************************************ 00:06:06.561 06:34:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.561 06:34:54 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.561 06:34:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.561 06:34:54 -- common/autotest_common.sh@10 -- # set +x 00:06:06.819 ************************************ 00:06:06.819 START TEST skip_rpc 00:06:06.819 ************************************ 00:06:06.819 06:34:54 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.819 * Looking for test storage... 00:06:06.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.819 06:34:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.819 06:34:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.819 06:34:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.819 06:34:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.819 06:34:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.819 06:34:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.819 ************************************ 00:06:06.819 START TEST skip_rpc 00:06:06.819 ************************************ 00:06:06.819 06:34:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:06.819 06:34:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=508786 00:06:06.819 06:34:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.819 06:34:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.819 06:34:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.819 [2024-07-15 06:34:54.306350] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:06.819 [2024-07-15 06:34:54.306414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508786 ] 00:06:06.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.819 [2024-07-15 06:34:54.367682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.076 [2024-07-15 06:34:54.457312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 508786 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 508786 ']' 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 508786 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 508786 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 508786' 00:06:12.336 killing process with pid 508786 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 508786 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 508786 00:06:12.336 00:06:12.336 real 0m5.443s 00:06:12.336 user 0m5.137s 00:06:12.336 sys 0m0.303s 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.336 06:34:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.336 ************************************ 00:06:12.336 END TEST skip_rpc 
00:06:12.336 ************************************ 00:06:12.336 06:34:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.336 06:34:59 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.336 06:34:59 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.336 06:34:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.336 ************************************ 00:06:12.336 START TEST skip_rpc_with_json 00:06:12.337 ************************************ 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=509473 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 509473 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 509473 ']' 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.337 06:34:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.337 [2024-07-15 06:34:59.804719] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
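Aside: the skip_rpc_with_json run that begins here is a save/replay round trip. A minimal sketch of the flow, with the long jenkins workspace paths shortened for readability (the real commands appear verbatim further down in this log), looks like:

    spdk_tgt -m 0x1 &                             # target with the RPC server enabled
    rpc.py nvmf_create_transport -t tcp           # mutate the live config over RPC
    rpc.py save_config > test/rpc/config.json     # snapshot the running configuration
    kill $spdk_pid
    spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json   # replay, no RPC server
    grep -q 'TCP Transport Init' test/rpc/log.txt # transport must be recreated from JSON

If the replayed target prints 'TCP Transport Init', the JSON snapshot faithfully reproduced the transport that was created over RPC; that is the check rpc/skip_rpc.sh@51 performs near the end of this test.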
00:06:12.337 [2024-07-15 06:34:59.804817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509473 ] 00:06:12.337 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.337 [2024-07-15 06:34:59.868833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.596 [2024-07-15 06:34:59.957977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.854 [2024-07-15 06:35:00.226292] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:12.854 request: 00:06:12.854 { 00:06:12.854 "trtype": "tcp", 00:06:12.854 "method": "nvmf_get_transports", 00:06:12.854 "req_id": 1 00:06:12.854 } 00:06:12.854 Got JSON-RPC error response 00:06:12.854 response: 00:06:12.854 { 00:06:12.854 "code": -19, 00:06:12.854 "message": "No such device" 00:06:12.854 } 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.854 [2024-07-15 06:35:00.234386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.854 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.854 { 00:06:12.854 "subsystems": [ 00:06:12.854 { 00:06:12.854 "subsystem": "vfio_user_target", 00:06:12.854 "config": null 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "keyring", 00:06:12.854 "config": [] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "iobuf", 00:06:12.854 "config": [ 00:06:12.854 { 00:06:12.854 "method": "iobuf_set_options", 00:06:12.854 "params": { 00:06:12.854 "small_pool_count": 8192, 00:06:12.854 "large_pool_count": 1024, 00:06:12.854 "small_bufsize": 8192, 00:06:12.854 "large_bufsize": 135168 00:06:12.854 } 00:06:12.854 } 00:06:12.854 ] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "sock", 00:06:12.854 "config": [ 00:06:12.854 { 00:06:12.854 "method": "sock_set_default_impl", 00:06:12.854 "params": { 00:06:12.854 "impl_name": "posix" 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": 
"sock_impl_set_options", 00:06:12.854 "params": { 00:06:12.854 "impl_name": "ssl", 00:06:12.854 "recv_buf_size": 4096, 00:06:12.854 "send_buf_size": 4096, 00:06:12.854 "enable_recv_pipe": true, 00:06:12.854 "enable_quickack": false, 00:06:12.854 "enable_placement_id": 0, 00:06:12.854 "enable_zerocopy_send_server": true, 00:06:12.854 "enable_zerocopy_send_client": false, 00:06:12.854 "zerocopy_threshold": 0, 00:06:12.854 "tls_version": 0, 00:06:12.854 "enable_ktls": false 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "sock_impl_set_options", 00:06:12.854 "params": { 00:06:12.854 "impl_name": "posix", 00:06:12.854 "recv_buf_size": 2097152, 00:06:12.854 "send_buf_size": 2097152, 00:06:12.854 "enable_recv_pipe": true, 00:06:12.854 "enable_quickack": false, 00:06:12.854 "enable_placement_id": 0, 00:06:12.854 "enable_zerocopy_send_server": true, 00:06:12.854 "enable_zerocopy_send_client": false, 00:06:12.854 "zerocopy_threshold": 0, 00:06:12.854 "tls_version": 0, 00:06:12.854 "enable_ktls": false 00:06:12.854 } 00:06:12.854 } 00:06:12.854 ] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "vmd", 00:06:12.854 "config": [] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "accel", 00:06:12.854 "config": [ 00:06:12.854 { 00:06:12.854 "method": "accel_set_options", 00:06:12.854 "params": { 00:06:12.854 "small_cache_size": 128, 00:06:12.854 "large_cache_size": 16, 00:06:12.854 "task_count": 2048, 00:06:12.854 "sequence_count": 2048, 00:06:12.854 "buf_count": 2048 00:06:12.854 } 00:06:12.854 } 00:06:12.854 ] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "bdev", 00:06:12.854 "config": [ 00:06:12.854 { 00:06:12.854 "method": "bdev_set_options", 00:06:12.854 "params": { 00:06:12.854 "bdev_io_pool_size": 65535, 00:06:12.854 "bdev_io_cache_size": 256, 00:06:12.854 "bdev_auto_examine": true, 00:06:12.854 "iobuf_small_cache_size": 128, 00:06:12.854 "iobuf_large_cache_size": 16 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "bdev_raid_set_options", 00:06:12.854 "params": { 00:06:12.854 "process_window_size_kb": 1024 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "bdev_iscsi_set_options", 00:06:12.854 "params": { 00:06:12.854 "timeout_sec": 30 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "bdev_nvme_set_options", 00:06:12.854 "params": { 00:06:12.854 "action_on_timeout": "none", 00:06:12.854 "timeout_us": 0, 00:06:12.854 "timeout_admin_us": 0, 00:06:12.854 "keep_alive_timeout_ms": 10000, 00:06:12.854 "arbitration_burst": 0, 00:06:12.854 "low_priority_weight": 0, 00:06:12.854 "medium_priority_weight": 0, 00:06:12.854 "high_priority_weight": 0, 00:06:12.854 "nvme_adminq_poll_period_us": 10000, 00:06:12.854 "nvme_ioq_poll_period_us": 0, 00:06:12.854 "io_queue_requests": 0, 00:06:12.854 "delay_cmd_submit": true, 00:06:12.854 "transport_retry_count": 4, 00:06:12.854 "bdev_retry_count": 3, 00:06:12.854 "transport_ack_timeout": 0, 00:06:12.854 "ctrlr_loss_timeout_sec": 0, 00:06:12.854 "reconnect_delay_sec": 0, 00:06:12.854 "fast_io_fail_timeout_sec": 0, 00:06:12.854 "disable_auto_failback": false, 00:06:12.854 "generate_uuids": false, 00:06:12.854 "transport_tos": 0, 00:06:12.854 "nvme_error_stat": false, 00:06:12.854 "rdma_srq_size": 0, 00:06:12.854 "io_path_stat": false, 00:06:12.854 "allow_accel_sequence": false, 00:06:12.854 "rdma_max_cq_size": 0, 00:06:12.854 "rdma_cm_event_timeout_ms": 0, 00:06:12.854 "dhchap_digests": [ 00:06:12.854 "sha256", 00:06:12.854 "sha384", 00:06:12.854 "sha512" 
00:06:12.854 ], 00:06:12.854 "dhchap_dhgroups": [ 00:06:12.854 "null", 00:06:12.854 "ffdhe2048", 00:06:12.854 "ffdhe3072", 00:06:12.854 "ffdhe4096", 00:06:12.854 "ffdhe6144", 00:06:12.854 "ffdhe8192" 00:06:12.854 ] 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "bdev_nvme_set_hotplug", 00:06:12.854 "params": { 00:06:12.854 "period_us": 100000, 00:06:12.854 "enable": false 00:06:12.854 } 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "method": "bdev_wait_for_examine" 00:06:12.854 } 00:06:12.854 ] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "scsi", 00:06:12.854 "config": null 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "scheduler", 00:06:12.854 "config": [ 00:06:12.854 { 00:06:12.854 "method": "framework_set_scheduler", 00:06:12.854 "params": { 00:06:12.854 "name": "static" 00:06:12.854 } 00:06:12.854 } 00:06:12.854 ] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "vhost_scsi", 00:06:12.854 "config": [] 00:06:12.854 }, 00:06:12.854 { 00:06:12.854 "subsystem": "vhost_blk", 00:06:12.854 "config": [] 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "subsystem": "ublk", 00:06:12.855 "config": [] 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "subsystem": "nbd", 00:06:12.855 "config": [] 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "subsystem": "nvmf", 00:06:12.855 "config": [ 00:06:12.855 { 00:06:12.855 "method": "nvmf_set_config", 00:06:12.855 "params": { 00:06:12.855 "discovery_filter": "match_any", 00:06:12.855 "admin_cmd_passthru": { 00:06:12.855 "identify_ctrlr": false 00:06:12.855 } 00:06:12.855 } 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "method": "nvmf_set_max_subsystems", 00:06:12.855 "params": { 00:06:12.855 "max_subsystems": 1024 00:06:12.855 } 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "method": "nvmf_set_crdt", 00:06:12.855 "params": { 00:06:12.855 "crdt1": 0, 00:06:12.855 "crdt2": 0, 00:06:12.855 "crdt3": 0 00:06:12.855 } 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "method": "nvmf_create_transport", 00:06:12.855 "params": { 00:06:12.855 "trtype": "TCP", 00:06:12.855 "max_queue_depth": 128, 00:06:12.855 "max_io_qpairs_per_ctrlr": 127, 00:06:12.855 "in_capsule_data_size": 4096, 00:06:12.855 "max_io_size": 131072, 00:06:12.855 "io_unit_size": 131072, 00:06:12.855 "max_aq_depth": 128, 00:06:12.855 "num_shared_buffers": 511, 00:06:12.855 "buf_cache_size": 4294967295, 00:06:12.855 "dif_insert_or_strip": false, 00:06:12.855 "zcopy": false, 00:06:12.855 "c2h_success": true, 00:06:12.855 "sock_priority": 0, 00:06:12.855 "abort_timeout_sec": 1, 00:06:12.855 "ack_timeout": 0, 00:06:12.855 "data_wr_pool_size": 0 00:06:12.855 } 00:06:12.855 } 00:06:12.855 ] 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "subsystem": "iscsi", 00:06:12.855 "config": [ 00:06:12.855 { 00:06:12.855 "method": "iscsi_set_options", 00:06:12.855 "params": { 00:06:12.855 "node_base": "iqn.2016-06.io.spdk", 00:06:12.855 "max_sessions": 128, 00:06:12.855 "max_connections_per_session": 2, 00:06:12.855 "max_queue_depth": 64, 00:06:12.855 "default_time2wait": 2, 00:06:12.855 "default_time2retain": 20, 00:06:12.855 "first_burst_length": 8192, 00:06:12.855 "immediate_data": true, 00:06:12.855 "allow_duplicated_isid": false, 00:06:12.855 "error_recovery_level": 0, 00:06:12.855 "nop_timeout": 60, 00:06:12.855 "nop_in_interval": 30, 00:06:12.855 "disable_chap": false, 00:06:12.855 "require_chap": false, 00:06:12.855 "mutual_chap": false, 00:06:12.855 "chap_group": 0, 00:06:12.855 "max_large_datain_per_connection": 64, 00:06:12.855 "max_r2t_per_connection": 4, 00:06:12.855 
"pdu_pool_size": 36864, 00:06:12.855 "immediate_data_pool_size": 16384, 00:06:12.855 "data_out_pool_size": 2048 00:06:12.855 } 00:06:12.855 } 00:06:12.855 ] 00:06:12.855 } 00:06:12.855 ] 00:06:12.855 } 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 509473 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 509473 ']' 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 509473 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 509473 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 509473' 00:06:12.855 killing process with pid 509473 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 509473 00:06:12.855 06:35:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 509473 00:06:13.425 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=509621 00:06:13.425 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.425 06:35:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 509621 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 509621 ']' 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 509621 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 509621 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 509621' 00:06:18.680 killing process with pid 509621 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 509621 00:06:18.680 06:35:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 509621 00:06:18.680 06:35:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.680 06:35:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.680 00:06:18.680 real 0m6.504s 
00:06:18.680 user 0m6.088s 00:06:18.680 sys 0m0.688s 00:06:18.680 06:35:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.680 06:35:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.680 ************************************ 00:06:18.680 END TEST skip_rpc_with_json 00:06:18.680 ************************************ 00:06:18.680 06:35:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:18.680 06:35:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.680 06:35:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.680 06:35:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.938 ************************************ 00:06:18.938 START TEST skip_rpc_with_delay 00:06:18.938 ************************************ 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.938 [2024-07-15 06:35:06.353790] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
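Aside: the spdk_app_start *ERROR* above, and the unclaim_cpu_cores error that follows, are the expected outcome here: skip_rpc_with_delay launches the target with mutually exclusive flags and asserts that startup fails. A minimal reproduction, assuming a built spdk_tgt binary, is:

    # expected to exit non-zero: --wait-for-rpc needs an RPC server,
    # but --no-rpc-server disables it
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # non-zero on this failure path

The NOT wrapper visible in the transcript inverts that status, so the test passes precisely when spdk_tgt refuses to start.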
00:06:18.938 [2024-07-15 06:35:06.353926] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.938 00:06:18.938 real 0m0.066s 00:06:18.938 user 0m0.038s 00:06:18.938 sys 0m0.027s 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.938 06:35:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:18.938 ************************************ 00:06:18.938 END TEST skip_rpc_with_delay 00:06:18.938 ************************************ 00:06:18.938 06:35:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:18.938 06:35:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:18.938 06:35:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:18.938 06:35:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.938 06:35:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.938 06:35:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.938 ************************************ 00:06:18.938 START TEST exit_on_failed_rpc_init 00:06:18.938 ************************************ 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=510336 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 510336 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 510336 ']' 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.938 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.938 [2024-07-15 06:35:06.458415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
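Aside: exit_on_failed_rpc_init, starting here, is a socket-contention check: a first target claims the default RPC socket, then a second instance pointed at the same socket must fail to initialize and exit. Stripped to its essentials (build path shortened, the waitforlisten step elided), the scenario is:

    ./build/bin/spdk_tgt -m 0x1 &   # pid 510336 in this run; owns /var/tmp/spdk.sock
    # the second instance must fail: the Unix domain socket is already taken
    ./build/bin/spdk_tgt -m 0x2     # rpc.c: '/var/tmp/spdk.sock in use. Specify another.'

The rpc.c errors a few lines below confirm exactly that failure.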
00:06:18.938 [2024-07-15 06:35:06.458493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510336 ] 00:06:18.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.938 [2024-07-15 06:35:06.516647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.197 [2024-07-15 06:35:06.606804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.455 06:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.455 [2024-07-15 06:35:06.909157] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:19.455 [2024-07-15 06:35:06.909251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510350 ] 00:06:19.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.455 [2024-07-15 06:35:06.970959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.455 [2024-07-15 06:35:07.065367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.455 [2024-07-15 06:35:07.065477] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
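Aside: the es=234 / es=106 / es=1 bookkeeping just below comes from the NOT helper in autotest_common.sh, which captures the child's exit status, maps statuses above 128 down (234 becomes 106 here), and then inverts the result. A simplified sketch of the idiom (not the exact helper, which also validates its argument via type -t / type -P) is:

    # succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the test expects
    }
    NOT ./build/bin/spdk_tgt -m 0x2 && echo 'duplicate-socket test passed'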
00:06:19.455 [2024-07-15 06:35:07.065500] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.455 [2024-07-15 06:35:07.065513] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 510336 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 510336 ']' 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 510336 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.712 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 510336 00:06:19.713 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.713 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.713 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 510336' 00:06:19.713 killing process with pid 510336 00:06:19.713 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 510336 00:06:19.713 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 510336 00:06:19.970 00:06:19.970 real 0m1.166s 00:06:19.970 user 0m1.253s 00:06:19.970 sys 0m0.461s 00:06:19.970 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.970 06:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.970 ************************************ 00:06:19.970 END TEST exit_on_failed_rpc_init 00:06:19.970 ************************************ 00:06:20.249 06:35:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.249 00:06:20.249 real 0m13.422s 00:06:20.249 user 0m12.616s 00:06:20.249 sys 0m1.640s 00:06:20.249 06:35:07 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.249 06:35:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 ************************************ 00:06:20.249 END TEST skip_rpc 00:06:20.249 ************************************ 00:06:20.249 06:35:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.249 06:35:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.249 06:35:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.249 06:35:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.249 ************************************ 00:06:20.249 START TEST rpc_client 00:06:20.249 ************************************ 00:06:20.249 06:35:07 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.249 * Looking for test storage... 00:06:20.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.249 06:35:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.249 OK 00:06:20.249 06:35:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.249 00:06:20.249 real 0m0.069s 00:06:20.249 user 0m0.031s 00:06:20.249 sys 0m0.043s 00:06:20.249 06:35:07 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.249 06:35:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 ************************************ 00:06:20.249 END TEST rpc_client 00:06:20.249 ************************************ 00:06:20.249 06:35:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.249 06:35:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.249 06:35:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.249 06:35:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 ************************************ 00:06:20.249 START TEST json_config 00:06:20.249 ************************************ 00:06:20.249 06:35:07 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.249 06:35:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.249 06:35:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.249 06:35:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.249 06:35:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.249 06:35:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.249 06:35:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.249 06:35:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.249 06:35:07 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.249 06:35:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@47 -- # : 0 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.249 06:35:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.250 06:35:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.250 06:35:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:20.250 INFO: JSON configuration test init 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.250 06:35:07 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.250 06:35:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.250 06:35:07 json_config -- json_config/common.sh@10 -- # shift 00:06:20.250 06:35:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.250 06:35:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.250 06:35:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.250 06:35:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.250 06:35:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.250 06:35:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=510586 00:06:20.250 06:35:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.250 06:35:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.250 Waiting for target to run... 
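Aside: the declare -A lines above are the whole state model of the json_config harness: each app flavor ('target' and 'initiator') gets its own PID slot, RPC socket, CLI parameters, and saved-config path. Reproducing just those declarations from the transcript (config paths shortened):

    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock'
                           [initiator]='/var/tmp/spdk_initiator.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A configs_path=([target]='spdk_tgt_config.json'
                             [initiator]='spdk_initiator_config.json')
    # every tgt_rpc call below is rpc.py -s ${app_socket[target]} ...

The 'Waiting for target to run...' message above corresponds to waitforlisten polling ${app_socket[target]} until the freshly started spdk_tgt answers on it.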
00:06:20.250 06:35:07 json_config -- json_config/common.sh@25 -- # waitforlisten 510586 /var/tmp/spdk_tgt.sock 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@827 -- # '[' -z 510586 ']' 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.250 06:35:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.508 [2024-07-15 06:35:07.869631] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:20.508 [2024-07-15 06:35:07.869704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510586 ] 00:06:20.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.767 [2024-07-15 06:35:08.209961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.767 [2024-07-15 06:35:08.273342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.332 06:35:08 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.333 06:35:08 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:21.333 06:35:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.333 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:21.333 06:35:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:21.333 06:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:21.333 06:35:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.333 06:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.333 06:35:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:21.333 06:35:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:24.615 06:35:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:24.615 06:35:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:24.615 06:35:11 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:24.615 06:35:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:24.615 06:35:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:24.615 06:35:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:24.615 06:35:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:24.615 06:35:12 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:24.615 06:35:12 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:24.615 06:35:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.615 06:35:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:24.873 06:35:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:24.873 06:35:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:24.873 06:35:12 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.873 06:35:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.873 MallocForNvmf0 00:06:25.131 06:35:12 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.131 06:35:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.131 MallocForNvmf1 00:06:25.131 06:35:12 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.131 06:35:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.389 [2024-07-15 06:35:12.962803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.389 06:35:12 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.390 06:35:12 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.648 06:35:13 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.648 06:35:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.905 06:35:13 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.905 06:35:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.162 06:35:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.162 06:35:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.419 [2024-07-15 06:35:13.934020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.419 06:35:13 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:26.419 06:35:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.419 06:35:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.419 06:35:13 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:26.419 06:35:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.419 06:35:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.419 06:35:13 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:26.419 06:35:13 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.419 06:35:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.676 MallocBdevForConfigChangeCheck 00:06:26.676 06:35:14 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:26.676 06:35:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.676 06:35:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.676 06:35:14 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:26.676 06:35:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.242 06:35:14 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:27.242 INFO: shutting down applications... 
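The MallocBdevForConfigChangeCheck bdev created above is a sentinel: the test snapshots the live configuration with save_config now, deletes the sentinel later, and expects the next snapshot to diff as changed. Reduced to the RPC round-trips, assuming the same socket as above (output file names are illustrative):

    RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

    # Snapshot with the sentinel present...
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $RPC save_config > /tmp/config_before.json

    # ...then remove it; the next save_config must differ.
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    $RPC save_config > /tmp/config_after.json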
00:06:27.242 06:35:14 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:27.242 06:35:14 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:27.242 06:35:14 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:27.242 06:35:14 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:29.138 Calling clear_iscsi_subsystem 00:06:29.138 Calling clear_nvmf_subsystem 00:06:29.138 Calling clear_nbd_subsystem 00:06:29.138 Calling clear_ublk_subsystem 00:06:29.138 Calling clear_vhost_blk_subsystem 00:06:29.138 Calling clear_vhost_scsi_subsystem 00:06:29.138 Calling clear_bdev_subsystem 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@345 -- # break 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:29.138 06:35:16 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:29.138 06:35:16 json_config -- json_config/common.sh@31 -- # local app=target 00:06:29.138 06:35:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.138 06:35:16 json_config -- json_config/common.sh@35 -- # [[ -n 510586 ]] 00:06:29.138 06:35:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 510586 00:06:29.138 06:35:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.138 06:35:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.138 06:35:16 json_config -- json_config/common.sh@41 -- # kill -0 510586 00:06:29.138 06:35:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.706 06:35:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.706 06:35:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.706 06:35:17 json_config -- json_config/common.sh@41 -- # kill -0 510586 00:06:29.706 06:35:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.706 06:35:17 json_config -- json_config/common.sh@43 -- # break 00:06:29.706 06:35:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.706 06:35:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.706 SPDK target shutdown done 00:06:29.706 06:35:17 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:29.706 INFO: relaunching applications... 
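The shutdown just traced is signal-then-poll: SIGINT to the target, then up to 30 probes with kill -0 at half-second intervals before 'SPDK target shutdown done' is printed. The same loop, condensed (tgt_pid as launched earlier):

    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only tests process existence; no signal is delivered.
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done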
00:06:29.706 06:35:17 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.706 06:35:17 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.706 06:35:17 json_config -- json_config/common.sh@10 -- # shift 00:06:29.706 06:35:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.706 06:35:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.706 06:35:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.706 06:35:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.706 06:35:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.706 06:35:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=511779 00:06:29.706 06:35:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.706 06:35:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.706 Waiting for target to run... 00:06:29.707 06:35:17 json_config -- json_config/common.sh@25 -- # waitforlisten 511779 /var/tmp/spdk_tgt.sock 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@827 -- # '[' -z 511779 ']' 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.707 06:35:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 [2024-07-15 06:35:17.212274] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:29.707 [2024-07-15 06:35:17.212386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511779 ] 00:06:29.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.275 [2024-07-15 06:35:17.734921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.275 [2024-07-15 06:35:17.815173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.558 [2024-07-15 06:35:20.847891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.558 [2024-07-15 06:35:20.880381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.124 06:35:21 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.124 06:35:21 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:34.124 06:35:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.124 00:06:34.124 06:35:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:34.124 06:35:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.124 INFO: Checking if target configuration is the same... 
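The 'same configuration' check about to run compares two JSON documents by sorting each with config_filter.py and diffing the results, which is what the +-prefixed json_diff.sh trace below performs. Its core as a sketch (config_filter.py reading stdin is an assumption inferred from the trace):

    # json_diff-style comparison of two SPDK JSON configs passed as $1 and $2.
    sort_cfg() { ./test/json_config/config_filter.py -method sort < "$1"; }

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    sort_cfg "$1" > "$tmp1"
    sort_cfg "$2" > "$tmp2"

    # Exit 0 => identical after sorting; exit 1 => configuration change detected.
    diff -u "$tmp1" "$tmp2" && ret=0 || ret=1
    rm -f "$tmp1" "$tmp2"
    exit "$ret"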
00:06:34.124 06:35:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.124 06:35:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:34.124 06:35:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.124 + '[' 2 -ne 2 ']' 00:06:34.124 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.124 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.124 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.124 +++ basename /dev/fd/62 00:06:34.124 ++ mktemp /tmp/62.XXX 00:06:34.124 + tmp_file_1=/tmp/62.uVi 00:06:34.124 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.124 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.124 + tmp_file_2=/tmp/spdk_tgt_config.json.2df 00:06:34.124 + ret=0 00:06:34.124 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.689 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.689 + diff -u /tmp/62.uVi /tmp/spdk_tgt_config.json.2df 00:06:34.689 + echo 'INFO: JSON config files are the same' 00:06:34.689 INFO: JSON config files are the same 00:06:34.689 + rm /tmp/62.uVi /tmp/spdk_tgt_config.json.2df 00:06:34.689 + exit 0 00:06:34.689 06:35:22 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:34.689 06:35:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.689 INFO: changing configuration and checking if this can be detected... 00:06:34.689 06:35:22 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.689 06:35:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.689 06:35:22 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.689 06:35:22 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:34.689 06:35:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.689 + '[' 2 -ne 2 ']' 00:06:34.689 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:34.948 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.948 +++ basename /dev/fd/62 00:06:34.948 ++ mktemp /tmp/62.XXX 00:06:34.948 + tmp_file_1=/tmp/62.73F 00:06:34.948 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.948 + tmp_file_2=/tmp/spdk_tgt_config.json.g3C 00:06:34.948 + ret=0 00:06:34.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.240 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.240 + diff -u /tmp/62.73F /tmp/spdk_tgt_config.json.g3C 00:06:35.240 + ret=1 00:06:35.240 + echo '=== Start of file: /tmp/62.73F ===' 00:06:35.240 + cat /tmp/62.73F 00:06:35.240 + echo '=== End of file: /tmp/62.73F ===' 00:06:35.240 + echo '' 00:06:35.240 + echo '=== Start of file: /tmp/spdk_tgt_config.json.g3C ===' 00:06:35.240 + cat /tmp/spdk_tgt_config.json.g3C 00:06:35.240 + echo '=== End of file: /tmp/spdk_tgt_config.json.g3C ===' 00:06:35.240 + echo '' 00:06:35.240 + rm /tmp/62.73F /tmp/spdk_tgt_config.json.g3C 00:06:35.240 + exit 1 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:35.240 INFO: configuration change detected. 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 511779 ]] 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.240 06:35:22 json_config -- json_config/json_config.sh@323 -- # killprocess 511779 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@946 -- # '[' -z 511779 ']' 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@950 -- # kill -0 511779 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@951 -- # uname 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.240 06:35:22 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 511779 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 511779' 00:06:35.240 killing process with pid 511779 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@965 -- # kill 511779 00:06:35.240 06:35:22 json_config -- common/autotest_common.sh@970 -- # wait 511779 00:06:37.137 06:35:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.137 06:35:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:37.137 06:35:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.137 06:35:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 06:35:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:37.137 06:35:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:37.137 INFO: Success 00:06:37.137 00:06:37.137 real 0m16.721s 00:06:37.137 user 0m18.658s 00:06:37.137 sys 0m2.021s 00:06:37.137 06:35:24 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.137 06:35:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 ************************************ 00:06:37.137 END TEST json_config 00:06:37.137 ************************************ 00:06:37.137 06:35:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.137 06:35:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.137 06:35:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.137 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 ************************************ 00:06:37.137 START TEST json_config_extra_key 00:06:37.137 ************************************ 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.137 06:35:24 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.137 06:35:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.137 06:35:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.137 06:35:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.137 06:35:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.137 06:35:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.137 06:35:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.137 06:35:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:37.137 06:35:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.137 06:35:24 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.137 06:35:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:37.137 INFO: launching applications... 00:06:37.137 06:35:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=512819 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.137 Waiting for target to run... 
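json_config_extra_key differs from the earlier run in that the target boots straight from a canned file via --json instead of waiting for RPCs. The log never shows extra_key.json itself, so the config below is purely illustrative of the subsystems/config/method/params shape such a file takes:

    cat > /tmp/extra_key.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 256, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF

    # The config is replayed at startup; no --wait-for-rpc phase.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.json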
00:06:37.137 06:35:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 512819 /var/tmp/spdk_tgt.sock 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 512819 ']' 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.137 06:35:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 [2024-07-15 06:35:24.627771] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:37.137 [2024-07-15 06:35:24.627870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512819 ] 00:06:37.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.395 [2024-07-15 06:35:24.967950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.653 [2024-07-15 06:35:25.031692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.218 06:35:25 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.218 06:35:25 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:38.218 00:06:38.218 06:35:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:38.218 INFO: shutting down applications... 
00:06:38.218 06:35:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 512819 ]] 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 512819 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 512819 00:06:38.218 06:35:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 512819 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.477 06:35:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.477 SPDK target shutdown done 00:06:38.477 06:35:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.477 Success 00:06:38.477 00:06:38.477 real 0m1.540s 00:06:38.477 user 0m1.515s 00:06:38.477 sys 0m0.430s 00:06:38.477 06:35:26 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.477 06:35:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.477 ************************************ 00:06:38.477 END TEST json_config_extra_key 00:06:38.477 ************************************ 00:06:38.735 06:35:26 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.735 06:35:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.735 06:35:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.735 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 ************************************ 00:06:38.735 START TEST alias_rpc 00:06:38.735 ************************************ 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.735 * Looking for test storage... 
00:06:38.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:38.735 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.735 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=513012 00:06:38.735 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.735 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 513012 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 513012 ']' 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.735 06:35:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 [2024-07-15 06:35:26.221159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:38.735 [2024-07-15 06:35:26.221249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513012 ] 00:06:38.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.735 [2024-07-15 06:35:26.278065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.993 [2024-07-15 06:35:26.370642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.251 06:35:26 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.251 06:35:26 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.251 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:39.509 06:35:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 513012 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 513012 ']' 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 513012 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 513012 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 513012' 00:06:39.509 killing process with pid 513012 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@965 -- # kill 513012 00:06:39.509 06:35:26 alias_rpc -- common/autotest_common.sh@970 -- # wait 513012 00:06:39.767 00:06:39.767 real 0m1.203s 00:06:39.767 user 0m1.266s 00:06:39.767 sys 0m0.439s 00:06:39.767 06:35:27 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.767 06:35:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.767 
************************************ 00:06:39.767 END TEST alias_rpc 00:06:39.767 ************************************ 00:06:39.767 06:35:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:39.767 06:35:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.767 06:35:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.767 06:35:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.767 06:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:39.767 ************************************ 00:06:39.767 START TEST spdkcli_tcp 00:06:39.767 ************************************ 00:06:39.767 06:35:27 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.025 * Looking for test storage... 00:06:40.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=513216 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.025 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 513216 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 513216 ']' 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.025 06:35:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.026 [2024-07-15 06:35:27.470509] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:40.026 [2024-07-15 06:35:27.470589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513216 ] 00:06:40.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.026 [2024-07-15 06:35:27.532378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.026 [2024-07-15 06:35:27.617343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.026 [2024-07-15 06:35:27.617346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.284 06:35:27 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.284 06:35:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:40.284 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=513323 00:06:40.284 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.284 06:35:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.542 [ 00:06:40.542 "bdev_malloc_delete", 00:06:40.542 "bdev_malloc_create", 00:06:40.542 "bdev_null_resize", 00:06:40.542 "bdev_null_delete", 00:06:40.542 "bdev_null_create", 00:06:40.542 "bdev_nvme_cuse_unregister", 00:06:40.542 "bdev_nvme_cuse_register", 00:06:40.542 "bdev_opal_new_user", 00:06:40.542 "bdev_opal_set_lock_state", 00:06:40.542 "bdev_opal_delete", 00:06:40.542 "bdev_opal_get_info", 00:06:40.542 "bdev_opal_create", 00:06:40.542 "bdev_nvme_opal_revert", 00:06:40.542 "bdev_nvme_opal_init", 00:06:40.542 "bdev_nvme_send_cmd", 00:06:40.542 "bdev_nvme_get_path_iostat", 00:06:40.542 "bdev_nvme_get_mdns_discovery_info", 00:06:40.542 "bdev_nvme_stop_mdns_discovery", 00:06:40.542 "bdev_nvme_start_mdns_discovery", 00:06:40.542 "bdev_nvme_set_multipath_policy", 00:06:40.542 "bdev_nvme_set_preferred_path", 00:06:40.542 "bdev_nvme_get_io_paths", 00:06:40.542 "bdev_nvme_remove_error_injection", 00:06:40.542 "bdev_nvme_add_error_injection", 00:06:40.542 "bdev_nvme_get_discovery_info", 00:06:40.542 "bdev_nvme_stop_discovery", 00:06:40.542 "bdev_nvme_start_discovery", 00:06:40.542 "bdev_nvme_get_controller_health_info", 00:06:40.542 "bdev_nvme_disable_controller", 00:06:40.542 "bdev_nvme_enable_controller", 00:06:40.542 "bdev_nvme_reset_controller", 00:06:40.542 "bdev_nvme_get_transport_statistics", 00:06:40.542 "bdev_nvme_apply_firmware", 00:06:40.542 "bdev_nvme_detach_controller", 00:06:40.542 "bdev_nvme_get_controllers", 00:06:40.542 "bdev_nvme_attach_controller", 00:06:40.542 "bdev_nvme_set_hotplug", 00:06:40.542 "bdev_nvme_set_options", 00:06:40.542 "bdev_passthru_delete", 00:06:40.542 "bdev_passthru_create", 00:06:40.542 "bdev_lvol_set_parent_bdev", 00:06:40.542 "bdev_lvol_set_parent", 00:06:40.542 "bdev_lvol_check_shallow_copy", 00:06:40.542 "bdev_lvol_start_shallow_copy", 00:06:40.542 "bdev_lvol_grow_lvstore", 00:06:40.542 "bdev_lvol_get_lvols", 00:06:40.542 "bdev_lvol_get_lvstores", 00:06:40.542 "bdev_lvol_delete", 00:06:40.542 "bdev_lvol_set_read_only", 00:06:40.542 "bdev_lvol_resize", 00:06:40.542 "bdev_lvol_decouple_parent", 00:06:40.542 "bdev_lvol_inflate", 00:06:40.542 "bdev_lvol_rename", 00:06:40.542 "bdev_lvol_clone_bdev", 00:06:40.542 "bdev_lvol_clone", 00:06:40.542 "bdev_lvol_snapshot", 00:06:40.542 "bdev_lvol_create", 00:06:40.542 "bdev_lvol_delete_lvstore", 00:06:40.542 "bdev_lvol_rename_lvstore", 
00:06:40.542 "bdev_lvol_create_lvstore", 00:06:40.542 "bdev_raid_set_options", 00:06:40.542 "bdev_raid_remove_base_bdev", 00:06:40.542 "bdev_raid_add_base_bdev", 00:06:40.542 "bdev_raid_delete", 00:06:40.542 "bdev_raid_create", 00:06:40.542 "bdev_raid_get_bdevs", 00:06:40.542 "bdev_error_inject_error", 00:06:40.542 "bdev_error_delete", 00:06:40.542 "bdev_error_create", 00:06:40.542 "bdev_split_delete", 00:06:40.542 "bdev_split_create", 00:06:40.542 "bdev_delay_delete", 00:06:40.542 "bdev_delay_create", 00:06:40.542 "bdev_delay_update_latency", 00:06:40.542 "bdev_zone_block_delete", 00:06:40.542 "bdev_zone_block_create", 00:06:40.542 "blobfs_create", 00:06:40.542 "blobfs_detect", 00:06:40.542 "blobfs_set_cache_size", 00:06:40.542 "bdev_aio_delete", 00:06:40.542 "bdev_aio_rescan", 00:06:40.542 "bdev_aio_create", 00:06:40.542 "bdev_ftl_set_property", 00:06:40.542 "bdev_ftl_get_properties", 00:06:40.542 "bdev_ftl_get_stats", 00:06:40.542 "bdev_ftl_unmap", 00:06:40.542 "bdev_ftl_unload", 00:06:40.542 "bdev_ftl_delete", 00:06:40.542 "bdev_ftl_load", 00:06:40.542 "bdev_ftl_create", 00:06:40.542 "bdev_virtio_attach_controller", 00:06:40.542 "bdev_virtio_scsi_get_devices", 00:06:40.542 "bdev_virtio_detach_controller", 00:06:40.542 "bdev_virtio_blk_set_hotplug", 00:06:40.542 "bdev_iscsi_delete", 00:06:40.542 "bdev_iscsi_create", 00:06:40.542 "bdev_iscsi_set_options", 00:06:40.542 "accel_error_inject_error", 00:06:40.542 "ioat_scan_accel_module", 00:06:40.542 "dsa_scan_accel_module", 00:06:40.542 "iaa_scan_accel_module", 00:06:40.542 "vfu_virtio_create_scsi_endpoint", 00:06:40.542 "vfu_virtio_scsi_remove_target", 00:06:40.542 "vfu_virtio_scsi_add_target", 00:06:40.542 "vfu_virtio_create_blk_endpoint", 00:06:40.542 "vfu_virtio_delete_endpoint", 00:06:40.542 "keyring_file_remove_key", 00:06:40.542 "keyring_file_add_key", 00:06:40.542 "keyring_linux_set_options", 00:06:40.542 "iscsi_get_histogram", 00:06:40.542 "iscsi_enable_histogram", 00:06:40.542 "iscsi_set_options", 00:06:40.542 "iscsi_get_auth_groups", 00:06:40.542 "iscsi_auth_group_remove_secret", 00:06:40.542 "iscsi_auth_group_add_secret", 00:06:40.542 "iscsi_delete_auth_group", 00:06:40.542 "iscsi_create_auth_group", 00:06:40.542 "iscsi_set_discovery_auth", 00:06:40.542 "iscsi_get_options", 00:06:40.542 "iscsi_target_node_request_logout", 00:06:40.542 "iscsi_target_node_set_redirect", 00:06:40.542 "iscsi_target_node_set_auth", 00:06:40.542 "iscsi_target_node_add_lun", 00:06:40.542 "iscsi_get_stats", 00:06:40.542 "iscsi_get_connections", 00:06:40.542 "iscsi_portal_group_set_auth", 00:06:40.542 "iscsi_start_portal_group", 00:06:40.542 "iscsi_delete_portal_group", 00:06:40.542 "iscsi_create_portal_group", 00:06:40.542 "iscsi_get_portal_groups", 00:06:40.542 "iscsi_delete_target_node", 00:06:40.542 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.542 "iscsi_target_node_add_pg_ig_maps", 00:06:40.542 "iscsi_create_target_node", 00:06:40.543 "iscsi_get_target_nodes", 00:06:40.543 "iscsi_delete_initiator_group", 00:06:40.543 "iscsi_initiator_group_remove_initiators", 00:06:40.543 "iscsi_initiator_group_add_initiators", 00:06:40.543 "iscsi_create_initiator_group", 00:06:40.543 "iscsi_get_initiator_groups", 00:06:40.543 "nvmf_set_crdt", 00:06:40.543 "nvmf_set_config", 00:06:40.543 "nvmf_set_max_subsystems", 00:06:40.543 "nvmf_stop_mdns_prr", 00:06:40.543 "nvmf_publish_mdns_prr", 00:06:40.543 "nvmf_subsystem_get_listeners", 00:06:40.543 "nvmf_subsystem_get_qpairs", 00:06:40.543 "nvmf_subsystem_get_controllers", 00:06:40.543 "nvmf_get_stats", 00:06:40.543 
"nvmf_get_transports", 00:06:40.543 "nvmf_create_transport", 00:06:40.543 "nvmf_get_targets", 00:06:40.543 "nvmf_delete_target", 00:06:40.543 "nvmf_create_target", 00:06:40.543 "nvmf_subsystem_allow_any_host", 00:06:40.543 "nvmf_subsystem_remove_host", 00:06:40.543 "nvmf_subsystem_add_host", 00:06:40.543 "nvmf_ns_remove_host", 00:06:40.543 "nvmf_ns_add_host", 00:06:40.543 "nvmf_subsystem_remove_ns", 00:06:40.543 "nvmf_subsystem_add_ns", 00:06:40.543 "nvmf_subsystem_listener_set_ana_state", 00:06:40.543 "nvmf_discovery_get_referrals", 00:06:40.543 "nvmf_discovery_remove_referral", 00:06:40.543 "nvmf_discovery_add_referral", 00:06:40.543 "nvmf_subsystem_remove_listener", 00:06:40.543 "nvmf_subsystem_add_listener", 00:06:40.543 "nvmf_delete_subsystem", 00:06:40.543 "nvmf_create_subsystem", 00:06:40.543 "nvmf_get_subsystems", 00:06:40.543 "env_dpdk_get_mem_stats", 00:06:40.543 "nbd_get_disks", 00:06:40.543 "nbd_stop_disk", 00:06:40.543 "nbd_start_disk", 00:06:40.543 "ublk_recover_disk", 00:06:40.543 "ublk_get_disks", 00:06:40.543 "ublk_stop_disk", 00:06:40.543 "ublk_start_disk", 00:06:40.543 "ublk_destroy_target", 00:06:40.543 "ublk_create_target", 00:06:40.543 "virtio_blk_create_transport", 00:06:40.543 "virtio_blk_get_transports", 00:06:40.543 "vhost_controller_set_coalescing", 00:06:40.543 "vhost_get_controllers", 00:06:40.543 "vhost_delete_controller", 00:06:40.543 "vhost_create_blk_controller", 00:06:40.543 "vhost_scsi_controller_remove_target", 00:06:40.543 "vhost_scsi_controller_add_target", 00:06:40.543 "vhost_start_scsi_controller", 00:06:40.543 "vhost_create_scsi_controller", 00:06:40.543 "thread_set_cpumask", 00:06:40.543 "framework_get_scheduler", 00:06:40.543 "framework_set_scheduler", 00:06:40.543 "framework_get_reactors", 00:06:40.543 "thread_get_io_channels", 00:06:40.543 "thread_get_pollers", 00:06:40.543 "thread_get_stats", 00:06:40.543 "framework_monitor_context_switch", 00:06:40.543 "spdk_kill_instance", 00:06:40.543 "log_enable_timestamps", 00:06:40.543 "log_get_flags", 00:06:40.543 "log_clear_flag", 00:06:40.543 "log_set_flag", 00:06:40.543 "log_get_level", 00:06:40.543 "log_set_level", 00:06:40.543 "log_get_print_level", 00:06:40.543 "log_set_print_level", 00:06:40.543 "framework_enable_cpumask_locks", 00:06:40.543 "framework_disable_cpumask_locks", 00:06:40.543 "framework_wait_init", 00:06:40.543 "framework_start_init", 00:06:40.543 "scsi_get_devices", 00:06:40.543 "bdev_get_histogram", 00:06:40.543 "bdev_enable_histogram", 00:06:40.543 "bdev_set_qos_limit", 00:06:40.543 "bdev_set_qd_sampling_period", 00:06:40.543 "bdev_get_bdevs", 00:06:40.543 "bdev_reset_iostat", 00:06:40.543 "bdev_get_iostat", 00:06:40.543 "bdev_examine", 00:06:40.543 "bdev_wait_for_examine", 00:06:40.543 "bdev_set_options", 00:06:40.543 "notify_get_notifications", 00:06:40.543 "notify_get_types", 00:06:40.543 "accel_get_stats", 00:06:40.543 "accel_set_options", 00:06:40.543 "accel_set_driver", 00:06:40.543 "accel_crypto_key_destroy", 00:06:40.543 "accel_crypto_keys_get", 00:06:40.543 "accel_crypto_key_create", 00:06:40.543 "accel_assign_opc", 00:06:40.543 "accel_get_module_info", 00:06:40.543 "accel_get_opc_assignments", 00:06:40.543 "vmd_rescan", 00:06:40.543 "vmd_remove_device", 00:06:40.543 "vmd_enable", 00:06:40.543 "sock_get_default_impl", 00:06:40.543 "sock_set_default_impl", 00:06:40.543 "sock_impl_set_options", 00:06:40.543 "sock_impl_get_options", 00:06:40.543 "iobuf_get_stats", 00:06:40.543 "iobuf_set_options", 00:06:40.543 "keyring_get_keys", 00:06:40.543 "framework_get_pci_devices", 
00:06:40.543 "framework_get_config", 00:06:40.543 "framework_get_subsystems", 00:06:40.543 "vfu_tgt_set_base_path", 00:06:40.543 "trace_get_info", 00:06:40.543 "trace_get_tpoint_group_mask", 00:06:40.543 "trace_disable_tpoint_group", 00:06:40.543 "trace_enable_tpoint_group", 00:06:40.543 "trace_clear_tpoint_mask", 00:06:40.543 "trace_set_tpoint_mask", 00:06:40.543 "spdk_get_version", 00:06:40.543 "rpc_get_methods" 00:06:40.543 ] 00:06:40.543 06:35:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.543 06:35:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.543 06:35:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.801 06:35:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.801 06:35:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 513216 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 513216 ']' 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 513216 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 513216 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 513216' 00:06:40.801 killing process with pid 513216 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 513216 00:06:40.801 06:35:28 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 513216 00:06:41.059 00:06:41.059 real 0m1.230s 00:06:41.059 user 0m2.205s 00:06:41.059 sys 0m0.450s 00:06:41.059 06:35:28 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.059 06:35:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.059 ************************************ 00:06:41.059 END TEST spdkcli_tcp 00:06:41.059 ************************************ 00:06:41.059 06:35:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.059 06:35:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.059 06:35:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.059 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.059 ************************************ 00:06:41.059 START TEST dpdk_mem_utility 00:06:41.059 ************************************ 00:06:41.059 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.317 * Looking for test storage... 
00:06:41.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:41.317 06:35:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.318 06:35:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=513519 00:06:41.318 06:35:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.318 06:35:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 513519 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 513519 ']' 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.318 06:35:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.318 [2024-07-15 06:35:28.739116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:41.318 [2024-07-15 06:35:28.739209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513519 ] 00:06:41.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.318 [2024-07-15 06:35:28.801752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.318 [2024-07-15 06:35:28.893021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.576 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.576 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:41.576 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.576 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.576 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.576 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.576 { 00:06:41.576 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.576 } 00:06:41.576 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.576 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.834 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:41.834 1 heaps totaling size 814.000000 MiB 00:06:41.834 size: 814.000000 MiB heap id: 0 00:06:41.834 end heaps---------- 00:06:41.834 8 mempools totaling size 598.116089 MiB 00:06:41.834 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.834 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.834 size: 84.521057 MiB name: bdev_io_513519 00:06:41.834 size: 51.011292 MiB name: evtpool_513519 00:06:41.834 size: 50.003479 MiB name: 
msgpool_513519 00:06:41.834 size: 21.763794 MiB name: PDU_Pool 00:06:41.834 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.834 size: 0.026123 MiB name: Session_Pool 00:06:41.834 end mempools------- 00:06:41.834 6 memzones totaling size 4.142822 MiB 00:06:41.834 size: 1.000366 MiB name: RG_ring_0_513519 00:06:41.834 size: 1.000366 MiB name: RG_ring_1_513519 00:06:41.834 size: 1.000366 MiB name: RG_ring_4_513519 00:06:41.834 size: 1.000366 MiB name: RG_ring_5_513519 00:06:41.834 size: 0.125366 MiB name: RG_ring_2_513519 00:06:41.834 size: 0.015991 MiB name: RG_ring_3_513519 00:06:41.834 end memzones------- 00:06:41.834 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.834 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:41.834 list of free elements. size: 12.519348 MiB 00:06:41.834 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:41.834 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:41.834 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:41.834 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:41.834 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:41.834 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:41.834 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:41.834 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:41.834 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:41.834 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:41.834 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:41.834 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:41.834 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:41.834 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:41.834 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:41.834 list of standard malloc elements. 
size: 199.218079 MiB 00:06:41.834 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:41.834 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:41.834 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:41.834 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:41.834 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:41.834 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.834 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:41.834 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.834 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:41.834 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:41.834 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:41.835 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:41.835 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:41.835 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:41.835 list of memzone associated elements. 
size: 602.262573 MiB 00:06:41.835 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:41.835 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.835 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:41.835 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.835 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:41.835 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_513519_0 00:06:41.835 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:41.835 associated memzone info: size: 48.002930 MiB name: MP_evtpool_513519_0 00:06:41.835 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:41.835 associated memzone info: size: 48.002930 MiB name: MP_msgpool_513519_0 00:06:41.835 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:41.835 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.835 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:41.835 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.835 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:41.835 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_513519 00:06:41.835 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:41.835 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_513519 00:06:41.835 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.835 associated memzone info: size: 1.007996 MiB name: MP_evtpool_513519 00:06:41.835 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:41.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.835 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:41.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.835 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:41.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.835 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:41.835 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.835 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:41.835 associated memzone info: size: 1.000366 MiB name: RG_ring_0_513519 00:06:41.835 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:41.835 associated memzone info: size: 1.000366 MiB name: RG_ring_1_513519 00:06:41.835 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:41.835 associated memzone info: size: 1.000366 MiB name: RG_ring_4_513519 00:06:41.835 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:41.835 associated memzone info: size: 1.000366 MiB name: RG_ring_5_513519 00:06:41.835 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:41.835 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_513519 00:06:41.835 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:41.835 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.835 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:41.835 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.835 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:41.835 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.835 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:41.835 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_513519 00:06:41.835 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:41.835 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.835 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:41.835 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.835 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:41.835 associated memzone info: size: 0.015991 MiB name: RG_ring_3_513519 00:06:41.835 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:41.835 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.835 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:41.835 associated memzone info: size: 0.000183 MiB name: MP_msgpool_513519 00:06:41.835 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:41.835 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_513519 00:06:41.835 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:41.835 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.835 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.835 06:35:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 513519 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 513519 ']' 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 513519 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 513519 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 513519' 00:06:41.835 killing process with pid 513519 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 513519 00:06:41.835 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 513519 00:06:42.402 00:06:42.402 real 0m1.072s 00:06:42.402 user 0m1.021s 00:06:42.402 sys 0m0.409s 00:06:42.402 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.402 06:35:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.402 ************************************ 00:06:42.402 END TEST dpdk_mem_utility 00:06:42.402 ************************************ 00:06:42.402 06:35:29 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.402 06:35:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.402 06:35:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.402 06:35:29 -- common/autotest_common.sh@10 -- # set +x 00:06:42.402 ************************************ 00:06:42.402 START TEST event 00:06:42.402 ************************************ 00:06:42.402 06:35:29 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.402 * Looking for test storage... 
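The dpdk_mem_utility test that just ended drives scripts/dpdk_mem_info.py twice: the bare invocation prints the heap/mempool/memzone summary, and -m 0 prints the per-element layout of heap 0; both parse the dump the env_dpdk_get_mem_stats RPC writes to /tmp/spdk_mem_dump.txt. A condensed sketch of the same flow, assuming an SPDK checkout in $SPDK with the target already built (the waitforlisten step is elided to a comment):

# snapshot and inspect the running target's DPDK memory
$SPDK/build/bin/spdk_tgt &                    # reactor starts on core 0
spdk_pid=$!
# ... wait for /var/tmp/spdk.sock to accept RPCs, as waitforlisten does above ...
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                # heap, mempool and memzone totals
$SPDK/scripts/dpdk_mem_info.py -m 0           # element-level view of heap id 0
kill $spdk_pid && wait $spdk_pid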
00:06:42.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.402 06:35:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.402 06:35:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.402 06:35:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.402 06:35:29 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:42.402 06:35:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.402 06:35:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.402 ************************************ 00:06:42.402 START TEST event_perf 00:06:42.402 ************************************ 00:06:42.403 06:35:29 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.403 Running I/O for 1 seconds...[2024-07-15 06:35:29.841470] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:42.403 [2024-07-15 06:35:29.841536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513707 ] 00:06:42.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.403 [2024-07-15 06:35:29.901470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.403 [2024-07-15 06:35:29.992082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.403 [2024-07-15 06:35:29.992137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.403 [2024-07-15 06:35:29.992200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.403 [2024-07-15 06:35:29.992202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.775 Running I/O for 1 seconds... 00:06:43.775 lcore 0: 233251 00:06:43.775 lcore 1: 233251 00:06:43.775 lcore 2: 233250 00:06:43.775 lcore 3: 233251 00:06:43.775 done. 00:06:43.775 00:06:43.775 real 0m1.249s 00:06:43.775 user 0m4.153s 00:06:43.775 sys 0m0.092s 00:06:43.775 06:35:31 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.775 06:35:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.775 ************************************ 00:06:43.775 END TEST event_perf 00:06:43.775 ************************************ 00:06:43.775 06:35:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.775 06:35:31 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:43.775 06:35:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.775 06:35:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.775 ************************************ 00:06:43.775 START TEST event_reactor 00:06:43.775 ************************************ 00:06:43.775 06:35:31 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.775 [2024-07-15 06:35:31.140453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:43.775 [2024-07-15 06:35:31.140521] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513870 ] 00:06:43.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.775 [2024-07-15 06:35:31.205065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.775 [2024-07-15 06:35:31.296250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.150 test_start 00:06:45.150 oneshot 00:06:45.150 tick 100 00:06:45.150 tick 100 00:06:45.150 tick 250 00:06:45.150 tick 100 00:06:45.150 tick 100 00:06:45.150 tick 100 00:06:45.150 tick 250 00:06:45.150 tick 500 00:06:45.150 tick 100 00:06:45.150 tick 100 00:06:45.150 tick 250 00:06:45.150 tick 100 00:06:45.150 tick 100 00:06:45.150 test_end 00:06:45.150 00:06:45.150 real 0m1.252s 00:06:45.150 user 0m1.161s 00:06:45.150 sys 0m0.086s 00:06:45.150 06:35:32 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.150 06:35:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:45.150 ************************************ 00:06:45.150 END TEST event_reactor 00:06:45.150 ************************************ 00:06:45.150 06:35:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.150 06:35:32 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:45.150 06:35:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.150 06:35:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.150 ************************************ 00:06:45.150 START TEST event_reactor_perf 00:06:45.150 ************************************ 00:06:45.150 06:35:32 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.150 [2024-07-15 06:35:32.445638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:45.150 [2024-07-15 06:35:32.445705] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514022 ] 00:06:45.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.150 [2024-07-15 06:35:32.510225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.150 [2024-07-15 06:35:32.600112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.085 test_start 00:06:46.085 test_end 00:06:46.085 Performance: 356190 events per second 00:06:46.085 00:06:46.085 real 0m1.252s 00:06:46.085 user 0m1.162s 00:06:46.085 sys 0m0.085s 00:06:46.085 06:35:33 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.085 06:35:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.085 ************************************ 00:06:46.085 END TEST event_reactor_perf 00:06:46.085 ************************************ 00:06:46.343 06:35:33 event -- event/event.sh@49 -- # uname -s 00:06:46.343 06:35:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.343 06:35:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.343 06:35:33 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.343 06:35:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.343 06:35:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.343 ************************************ 00:06:46.343 START TEST event_scheduler 00:06:46.343 ************************************ 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.343 * Looking for test storage... 00:06:46.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:46.343 06:35:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.343 06:35:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=514202 00:06:46.343 06:35:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.343 06:35:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.343 06:35:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 514202 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 514202 ']' 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
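The three event micro-benchmarks that just ran (event_perf, reactor, reactor_perf) share one calling convention: -m takes a core mask and -t a duration in seconds. Run standalone from the repo root they look like this, with the same arguments event.sh passed above:

# the invocations event.sh used, runnable by hand
test/event/event_perf/event_perf -m 0xF -t 1   # 4 lcores; prints one event count per lcore
test/event/reactor/reactor -t 1                # one reactor; oneshot plus 100/250/500 tick timers
test/event/reactor_perf/reactor_perf -t 1      # one reactor; events/sec (356190 in the run above)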
00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.343 06:35:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.343 [2024-07-15 06:35:33.825997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:46.343 [2024-07-15 06:35:33.826076] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514202 ] 00:06:46.343 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.343 [2024-07-15 06:35:33.889611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.624 [2024-07-15 06:35:33.978527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.624 [2024-07-15 06:35:33.978581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.624 [2024-07-15 06:35:33.978652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.624 [2024-07-15 06:35:33.978656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:46.624 06:35:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.624 POWER: Env isn't set yet! 00:06:46.624 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:46.624 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:46.624 POWER: Cannot get available frequencies of lcore 0 00:06:46.624 POWER: Attempting to initialise PSTAT power management... 
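The POWER lines are DPDK's power library probing frequency drivers: the ACPI cpufreq path fails because scaling_available_frequencies is unreadable on this host, so it falls back to the PSTAT path and pins each lcore's governor to 'performance', restoring the originals at shutdown (the 'set back to the original' messages near the end of the test). The sysfs state it toggles can be read directly; a quick look, assuming the usual layout and cpu0-3 to match the 0xF mask used here:

# per-cpu cpufreq driver and governor the power library manipulates
for c in /sys/devices/system/cpu/cpu[0-3]/cpufreq; do
  echo "$c: $(cat $c/scaling_driver) / $(cat $c/scaling_governor)"
done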
00:06:46.624 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:46.624 POWER: Initialized successfully for lcore 0 power management 00:06:46.624 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:46.624 POWER: Initialized successfully for lcore 1 power management 00:06:46.624 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:46.624 POWER: Initialized successfully for lcore 2 power management 00:06:46.624 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:46.624 POWER: Initialized successfully for lcore 3 power management 00:06:46.624 [2024-07-15 06:35:34.060059] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.624 [2024-07-15 06:35:34.060076] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.624 [2024-07-15 06:35:34.060086] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.624 06:35:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.624 [2024-07-15 06:35:34.162550] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.624 06:35:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.624 06:35:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.624 ************************************ 00:06:46.624 START TEST scheduler_create_thread 00:06:46.624 ************************************ 00:06:46.624 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:46.624 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.624 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.624 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.624 2 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.625 3 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.625 4 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.625 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 5 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 6 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 7 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 8 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 9 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 10 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 06:35:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.822 06:35:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.822 06:35:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.822 06:35:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.822 06:35:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.201 06:35:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.201 06:35:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:49.201 06:35:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:49.201 06:35:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.201 06:35:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.141 06:35:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.141 00:06:50.141 real 0m3.377s 00:06:50.141 user 0m0.007s 00:06:50.141 sys 0m0.007s 00:06:50.141 06:35:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.141 06:35:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.141 ************************************ 00:06:50.141 END TEST scheduler_create_thread 00:06:50.141 ************************************ 00:06:50.141 06:35:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.141 06:35:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 514202 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 514202 ']' 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 514202 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
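scheduler_create_thread above is driven entirely over RPC: the scheduler test app was started with -m 0xF -p 0x2 --wait-for-rpc, the dynamic scheduler selected, the framework initialized, and then threads created with a name (-n), an optional cpumask (-m) and an active percentage (-a). A sketch of the same calls through rpc.py, assuming the plugin module sits next to scheduler.sh as in the SPDK tree (thread ids 11 and 12 are simply the ones this run happened to return):

# select the dynamic scheduler, start the framework, then create and steer threads
scripts/rpc.py framework_set_scheduler dynamic
scripts/rpc.py framework_start_init
export PYTHONPATH=$PYTHONPATH:test/event/scheduler   # provides scheduler_plugin
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
    -n active_pinned -m 0x1 -a 100                   # pinned to lcore 0, 100% busy
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12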
00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 514202 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 514202' 00:06:50.141 killing process with pid 514202 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 514202 00:06:50.141 06:35:37 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 514202 00:06:50.400 [2024-07-15 06:35:37.947525] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:50.661 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:50.661 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:50.661 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:50.661 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:50.661 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:50.661 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:50.661 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:50.661 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:50.661 00:06:50.661 real 0m4.486s 00:06:50.661 user 0m7.954s 00:06:50.661 sys 0m0.323s 00:06:50.661 06:35:38 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.661 06:35:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.661 ************************************ 00:06:50.661 END TEST event_scheduler 00:06:50.661 ************************************ 00:06:50.661 06:35:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.661 06:35:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.661 06:35:38 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.661 06:35:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.661 06:35:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.921 ************************************ 00:06:50.921 START TEST app_repeat 00:06:50.921 ************************************ 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=514787 00:06:50.921 06:35:38 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 514787' 00:06:50.921 Process app_repeat pid: 514787 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.921 spdk_app_start Round 0 00:06:50.921 06:35:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 514787 /var/tmp/spdk-nbd.sock 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 514787 ']' 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.921 06:35:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.921 [2024-07-15 06:35:38.300498] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:50.921 [2024-07-15 06:35:38.300555] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514787 ] 00:06:50.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.922 [2024-07-15 06:35:38.358689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.922 [2024-07-15 06:35:38.446072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.922 [2024-07-15 06:35:38.446076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.181 06:35:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.181 06:35:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:51.181 06:35:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.440 Malloc0 00:06:51.440 06:35:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.699 Malloc1 00:06:51.699 06:35:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.699 06:35:39 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.699 06:35:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.957 /dev/nbd0 00:06:51.957 06:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.957 06:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.957 1+0 records in 00:06:51.957 1+0 records out 00:06:51.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192952 s, 21.2 MB/s 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:51.957 06:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:51.957 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.957 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.957 06:35:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.216 /dev/nbd1 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:52.216 06:35:39 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.216 1+0 records in 00:06:52.216 1+0 records out 00:06:52.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189344 s, 21.6 MB/s 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:52.216 06:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.216 06:35:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.474 { 00:06:52.474 "nbd_device": "/dev/nbd0", 00:06:52.474 "bdev_name": "Malloc0" 00:06:52.474 }, 00:06:52.474 { 00:06:52.474 "nbd_device": "/dev/nbd1", 00:06:52.474 "bdev_name": "Malloc1" 00:06:52.474 } 00:06:52.474 ]' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.474 { 00:06:52.474 "nbd_device": "/dev/nbd0", 00:06:52.474 "bdev_name": "Malloc0" 00:06:52.474 }, 00:06:52.474 { 00:06:52.474 "nbd_device": "/dev/nbd1", 00:06:52.474 "bdev_name": "Malloc1" 00:06:52.474 } 00:06:52.474 ]' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.474 /dev/nbd1' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.474 /dev/nbd1' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.474 06:35:39 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.474 256+0 records in 00:06:52.474 256+0 records out 00:06:52.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499657 s, 210 MB/s 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.474 256+0 records in 00:06:52.474 256+0 records out 00:06:52.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235416 s, 44.5 MB/s 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.474 06:35:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.474 256+0 records in 00:06:52.474 256+0 records out 00:06:52.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260752 s, 40.2 MB/s 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.474 06:35:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.475 06:35:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.475 06:35:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:52.475 06:35:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.475 06:35:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.733 06:35:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.991 06:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.248 06:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.248 06:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.248 06:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.507 06:35:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.507 06:35:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.766 06:35:41 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:54.025 [2024-07-15 06:35:41.386188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.025 [2024-07-15 06:35:41.475810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.025 [2024-07-15 06:35:41.475813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.025 [2024-07-15 06:35:41.537267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.025 [2024-07-15 06:35:41.537346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.559 06:35:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.559 06:35:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:56.559 spdk_app_start Round 1 00:06:56.559 06:35:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 514787 /var/tmp/spdk-nbd.sock 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 514787 ']' 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.559 06:35:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.815 06:35:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.815 06:35:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:56.815 06:35:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.072 Malloc0 00:06:57.072 06:35:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.330 Malloc1 00:06:57.330 06:35:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.330 06:35:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.589 /dev/nbd0 00:06:57.589 06:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.589 06:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.589 1+0 records in 00:06:57.589 1+0 records out 00:06:57.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193707 s, 21.1 MB/s 00:06:57.589 06:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.847 06:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:57.847 06:35:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.847 06:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:57.847 06:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:57.847 06:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.847 06:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.847 06:35:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.847 /dev/nbd1 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
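Annotation: the waitfornbd calls traced here poll /proc/partitions and then issue a single O_DIRECT read to prove the new device actually services I/O. Roughly, per the visible xtrace (the temp-file path is an assumption; the suite writes to spdk/test/event/nbdtest, and the sleep interval is not shown):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one 4096-byte O_DIRECT read: the device must answer real I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a zero-byte copy would mean the read silently failed
    }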
00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.107 1+0 records in 00:06:58.107 1+0 records out 00:06:58.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177096 s, 23.1 MB/s 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:58.107 06:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.107 06:35:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.366 { 00:06:58.366 "nbd_device": "/dev/nbd0", 00:06:58.366 "bdev_name": "Malloc0" 00:06:58.366 }, 00:06:58.366 { 00:06:58.366 "nbd_device": "/dev/nbd1", 00:06:58.366 "bdev_name": "Malloc1" 00:06:58.366 } 00:06:58.366 ]' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.366 { 00:06:58.366 "nbd_device": "/dev/nbd0", 00:06:58.366 "bdev_name": "Malloc0" 00:06:58.366 }, 00:06:58.366 { 00:06:58.366 "nbd_device": "/dev/nbd1", 00:06:58.366 "bdev_name": "Malloc1" 00:06:58.366 } 00:06:58.366 ]' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.366 /dev/nbd1' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.366 /dev/nbd1' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.366 256+0 records in 00:06:58.366 256+0 records out 00:06:58.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503513 s, 208 MB/s 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.366 256+0 records in 00:06:58.366 256+0 records out 00:06:58.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235042 s, 44.6 MB/s 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.366 256+0 records in 00:06:58.366 256+0 records out 00:06:58.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250496 s, 41.9 MB/s 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.366 06:35:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.624 
06:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.624 06:35:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.901 06:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.901 06:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.901 06:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.901 06:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.901 06:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.902 06:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.159 06:35:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.159 06:35:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.418 06:35:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.678 [2024-07-15 06:35:47.165748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.678 [2024-07-15 06:35:47.255225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.678 [2024-07-15 06:35:47.255231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.935 [2024-07-15 06:35:47.317336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
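Annotation: waitfornbd_exit, traced around the nbd_stop_disk calls here, is the inverse of waitfornbd: it waits for the kernel to drop the device from /proc/partitions. Roughly (the sleep interval is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # done once the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }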
00:06:59.935 [2024-07-15 06:35:47.317412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.473 06:35:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.473 06:35:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:02.473 spdk_app_start Round 2 00:07:02.473 06:35:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 514787 /var/tmp/spdk-nbd.sock 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 514787 ']' 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.473 06:35:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 06:35:50 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.731 06:35:50 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:02.731 06:35:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.989 Malloc0 00:07:02.989 06:35:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.247 Malloc1 00:07:03.247 06:35:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.247 06:35:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.505 /dev/nbd0 00:07:03.505 06:35:50 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.505 06:35:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.505 1+0 records in 00:07:03.505 1+0 records out 00:07:03.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206058 s, 19.9 MB/s 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:03.505 06:35:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:03.505 06:35:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.505 06:35:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.505 06:35:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.763 /dev/nbd1 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.763 1+0 records in 00:07:03.763 1+0 records out 00:07:03.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210404 s, 19.5 MB/s 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:03.763 06:35:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.763 06:35:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.020 { 00:07:04.020 "nbd_device": "/dev/nbd0", 00:07:04.020 "bdev_name": "Malloc0" 00:07:04.020 }, 00:07:04.020 { 00:07:04.020 "nbd_device": "/dev/nbd1", 00:07:04.020 "bdev_name": "Malloc1" 00:07:04.020 } 00:07:04.020 ]' 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.020 { 00:07:04.020 "nbd_device": "/dev/nbd0", 00:07:04.020 "bdev_name": "Malloc0" 00:07:04.020 }, 00:07:04.020 { 00:07:04.020 "nbd_device": "/dev/nbd1", 00:07:04.020 "bdev_name": "Malloc1" 00:07:04.020 } 00:07:04.020 ]' 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.020 /dev/nbd1' 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.020 /dev/nbd1' 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.020 06:35:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.021 256+0 records in 00:07:04.021 256+0 records out 00:07:04.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481137 s, 218 MB/s 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.021 256+0 records in 00:07:04.021 256+0 records out 00:07:04.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234255 s, 44.8 MB/s 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.021 256+0 records in 00:07:04.021 256+0 records out 00:07:04.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220634 s, 47.5 MB/s 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.021 06:35:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
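Annotation: the write/verify passes traced above (dd from /dev/urandom into a pattern file, dd onto each device with O_DIRECT, then cmp -b -n 1M per device) amount to the helper below. The block counts and cmp flags are taken from the trace; the argument handling and temp path are assumptions.

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest i
        if [ "$operation" = write ]; then
            # 256 x 4 KiB random pattern, pushed to each device with O_DIRECT
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # byte-wise compare of the first 1 MiB on each device, then clean up
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i" || return 1
            done
            rm "$tmp_file"
        fi
    }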
00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.280 06:35:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.571 06:35:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.829 06:35:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.829 06:35:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.088 06:35:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.348 [2024-07-15 06:35:52.921487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.607 [2024-07-15 06:35:53.011236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.607 [2024-07-15 06:35:53.011242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.607 [2024-07-15 06:35:53.072987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.607 [2024-07-15 06:35:53.073059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
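Annotation: nbd_get_count, traced twice per round (returning 2 while the disks are attached and 0 after nbd_stop_disk), is a jq/grep pipeline over the nbd_get_disks RPC. Sketch assuming rpc.py is on PATH (the log invokes it by its full workspace path):

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name count
        disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when it counts zero matches, hence the "true" in the trace
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }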
00:07:08.145 06:35:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 514787 /var/tmp/spdk-nbd.sock 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 514787 ']' 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.145 06:35:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:08.405 06:35:55 event.app_repeat -- event/event.sh@39 -- # killprocess 514787 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 514787 ']' 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 514787 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 514787 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 514787' 00:07:08.405 killing process with pid 514787 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@965 -- # kill 514787 00:07:08.405 06:35:55 event.app_repeat -- common/autotest_common.sh@970 -- # wait 514787 00:07:08.664 spdk_app_start is called in Round 0. 00:07:08.664 Shutdown signal received, stop current app iteration 00:07:08.664 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:08.664 spdk_app_start is called in Round 1. 00:07:08.664 Shutdown signal received, stop current app iteration 00:07:08.664 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:08.664 spdk_app_start is called in Round 2. 00:07:08.664 Shutdown signal received, stop current app iteration 00:07:08.664 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:08.664 spdk_app_start is called in Round 3. 
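Annotation: killprocess, whose trace closes each app iteration above, guards against signalling the wrong process before the kill/wait pair. Roughly, per the visible steps (the comm= probe and sudo check are straight from the trace; error handling is condensed):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # must still be alive
        # comm= prints just the executable name, e.g. reactor_0 for an SPDK app
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1    # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap so the pid can be reused
    }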
00:07:08.664 Shutdown signal received, stop current app iteration 00:07:08.664 06:35:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:08.664 06:35:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:08.664 00:07:08.664 real 0m17.900s 00:07:08.664 user 0m38.860s 00:07:08.664 sys 0m3.303s 00:07:08.664 06:35:56 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.664 06:35:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.664 ************************************ 00:07:08.664 END TEST app_repeat 00:07:08.664 ************************************ 00:07:08.664 06:35:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:08.664 06:35:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.664 06:35:56 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.664 06:35:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.664 06:35:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.664 ************************************ 00:07:08.664 START TEST cpu_locks 00:07:08.664 ************************************ 00:07:08.664 06:35:56 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.664 * Looking for test storage... 00:07:08.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:08.922 06:35:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:08.922 06:35:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:08.922 06:35:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:08.922 06:35:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:08.922 06:35:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.922 06:35:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.922 06:35:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 ************************************ 00:07:08.922 START TEST default_locks 00:07:08.922 ************************************ 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=517135 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 517135 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 517135 ']' 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
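Annotation: the cpu_locks suite starting here revolves around one predicate: does the target still hold its per-core lock file. From the lslocks/grep trace that follows, roughly:

    locks_exist() {
        local pid=$1
        # an SPDK app started without --disable-cpumask-locks holds a POSIX lock
        # per claimed core; lslocks lists them, grep -q turns that into a status
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }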
00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.922 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 [2024-07-15 06:35:56.358830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:08.922 [2024-07-15 06:35:56.358949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517135 ] 00:07:08.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.922 [2024-07-15 06:35:56.417679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.922 [2024-07-15 06:35:56.502871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.180 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.180 06:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:09.180 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 517135 00:07:09.180 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 517135 00:07:09.180 06:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.749 lslocks: write error 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 517135 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 517135 ']' 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 517135 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 517135 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 517135' 00:07:09.749 killing process with pid 517135 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 517135 00:07:09.749 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 517135 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 517135 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 517135 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 517135 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 517135 ']' 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (517135) - No such process 00:07:10.318 ERROR: process (pid: 517135) is no longer running 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.318 00:07:10.318 real 0m1.400s 00:07:10.318 user 0m1.327s 00:07:10.318 sys 0m0.563s 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.318 06:35:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.318 ************************************ 00:07:10.318 END TEST default_locks 00:07:10.318 ************************************ 00:07:10.318 06:35:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.318 06:35:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.318 06:35:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.318 06:35:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.318 ************************************ 00:07:10.318 START TEST default_locks_via_rpc 00:07:10.318 ************************************ 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=517331 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 517331 00:07:10.318 06:35:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 517331 ']' 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.318 06:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.318 [2024-07-15 06:35:57.809333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:10.318 [2024-07-15 06:35:57.809422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517331 ] 00:07:10.318 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.318 [2024-07-15 06:35:57.874573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.578 [2024-07-15 06:35:57.965645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 517331 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 517331 00:07:10.837 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 517331 00:07:11.096 06:35:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 517331 ']' 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 517331 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 517331 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 517331' 00:07:11.096 killing process with pid 517331 00:07:11.096 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 517331 00:07:11.097 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 517331 00:07:11.355 00:07:11.355 real 0m1.172s 00:07:11.356 user 0m1.094s 00:07:11.356 sys 0m0.539s 00:07:11.356 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.356 06:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.356 ************************************ 00:07:11.356 END TEST default_locks_via_rpc 00:07:11.356 ************************************ 00:07:11.356 06:35:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:11.356 06:35:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.356 06:35:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.356 06:35:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.615 ************************************ 00:07:11.615 START TEST non_locking_app_on_locked_coremask 00:07:11.615 ************************************ 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=517584 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 517584 /var/tmp/spdk.sock 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 517584 ']' 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
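Annotation: default_locks_via_rpc, which just finished above, toggles the same locks at runtime rather than at startup. A condensed replay; rpc_cmd is the suite's thin wrapper around scripts/rpc.py, and the lock-file glob in no_locks is an assumption (the trace only shows the array coming back empty):

    no_locks() {
        local lock_files=()
        shopt -s nullglob                     # keep the array empty on no match
        lock_files=(/var/tmp/spdk_cpu_lock*)  # assumed lock-file location
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))
    }

    # spdk_tgt -m 0x1 is already running as $spdk_tgt_pid at this point
    rpc_cmd framework_disable_cpumask_locks   # target drops its core lock files
    no_locks                                  # assert: nothing left to find
    rpc_cmd framework_enable_cpumask_locks    # target re-acquires them
    locks_exist "$spdk_tgt_pid"               # assert: the core-0 lock is back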
00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.616 06:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.616 [2024-07-15 06:35:59.025484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:11.616 [2024-07-15 06:35:59.025577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517584 ] 00:07:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.616 [2024-07-15 06:35:59.089415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.616 [2024-07-15 06:35:59.179798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=517598 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 517598 /var/tmp/spdk2.sock 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 517598 ']' 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.874 06:35:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.133 [2024-07-15 06:35:59.487638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:12.133 [2024-07-15 06:35:59.487715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517598 ] 00:07:12.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.133 [2024-07-15 06:35:59.578407] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
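Annotation: the two launches above are the heart of non_locking_app_on_locked_coremask: the second target reuses core 0 only because it opts out of lock acquisition. Condensed, with pids and sockets as in the log (spdk_tgt stands for build/bin/spdk_tgt):

    spdk_tgt -m 0x1 &                               # pid 517584: owns the core-0 lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                         # pid 517598: same mask, no locking
    waitforlisten "$pid2" /var/tmp/spdk2.sock       # logs "CPU core locks deactivated"
    locks_exist "$pid1"                             # the lock still belongs to pid1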
00:07:12.133 [2024-07-15 06:35:59.578440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.392 [2024-07-15 06:35:59.763924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.958 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.958 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:12.958 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 517584 00:07:12.958 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 517584 00:07:12.958 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.524 lslocks: write error 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 517584 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 517584 ']' 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 517584 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 517584 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 517584' 00:07:13.524 killing process with pid 517584 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 517584 00:07:13.524 06:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 517584 00:07:14.089 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 517598 00:07:14.089 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 517598 ']' 00:07:14.089 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 517598 00:07:14.089 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 517598 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 517598' 00:07:14.347 killing 
process with pid 517598 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 517598 00:07:14.347 06:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 517598 00:07:14.605 00:07:14.605 real 0m3.159s 00:07:14.605 user 0m3.288s 00:07:14.605 sys 0m1.056s 00:07:14.605 06:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.605 06:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.605 ************************************ 00:07:14.605 END TEST non_locking_app_on_locked_coremask 00:07:14.605 ************************************ 00:07:14.605 06:36:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:14.605 06:36:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.605 06:36:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.605 06:36:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.605 ************************************ 00:07:14.605 START TEST locking_app_on_unlocked_coremask 00:07:14.605 ************************************ 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=518012 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 518012 /var/tmp/spdk.sock 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 518012 ']' 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.605 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.863 [2024-07-15 06:36:02.236740] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:14.863 [2024-07-15 06:36:02.236833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518012 ] 00:07:14.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.863 [2024-07-15 06:36:02.301416] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
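The "lslocks: write error" lines above are harmless: the locks_exist helper pipes lslocks into grep -q, and grep exits on the first match, so lslocks takes a broken pipe on the rest of its output. A sketch of that check, with the names taken from the log:

    locks_exist() {
        # true if the pid holds one of the /var/tmp/spdk_cpu_lock_* files
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 517584 && echo "core locks held"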
00:07:14.863 [2024-07-15 06:36:02.301453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.863 [2024-07-15 06:36:02.398280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=518145 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 518145 /var/tmp/spdk2.sock 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 518145 ']' 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.121 06:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.121 [2024-07-15 06:36:02.703902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:15.121 [2024-07-15 06:36:02.704001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518145 ] 00:07:15.378 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.378 [2024-07-15 06:36:02.802754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.378 [2024-07-15 06:36:02.987514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.310 06:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.310 06:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:16.310 06:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 518145 00:07:16.310 06:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 518145 00:07:16.310 06:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.569 lslocks: write error 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 518012 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 518012 ']' 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 518012 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518012 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518012' 00:07:16.569 killing process with pid 518012 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 518012 00:07:16.569 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 518012 00:07:17.511 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 518145 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 518145 ']' 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 518145 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518145 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:17.512 
06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518145' 00:07:17.512 killing process with pid 518145 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 518145 00:07:17.512 06:36:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 518145 00:07:17.775 00:07:17.775 real 0m3.160s 00:07:17.775 user 0m3.296s 00:07:17.775 sys 0m1.069s 00:07:17.775 06:36:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.775 06:36:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.775 ************************************ 00:07:17.775 END TEST locking_app_on_unlocked_coremask 00:07:17.775 ************************************ 00:07:17.775 06:36:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:17.775 06:36:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.775 06:36:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.775 06:36:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.033 ************************************ 00:07:18.033 START TEST locking_app_on_locked_coremask 00:07:18.033 ************************************ 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=518446 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 518446 /var/tmp/spdk.sock 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 518446 ']' 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.033 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.033 [2024-07-15 06:36:05.447492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:18.033 [2024-07-15 06:36:05.447595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518446 ] 00:07:18.033 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.033 [2024-07-15 06:36:05.512839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.033 [2024-07-15 06:36:05.601310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=518574 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 518574 /var/tmp/spdk2.sock 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 518574 /var/tmp/spdk2.sock 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 518574 /var/tmp/spdk2.sock 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 518574 ']' 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.291 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.292 06:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.292 [2024-07-15 06:36:05.895920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
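The NOT/valid_exec_arg wrapping above runs waitforlisten expecting it to fail, since pid 518574 was launched without --disable-cpumask-locks while 518446 already holds core 0. A simplified stand-in for the real NOT helper in autotest_common.sh (an assumption for illustration, not the actual definition):

    NOT() {
        # invert the exit status: succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT false && echo "failure inverted as expected"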
00:07:18.292 [2024-07-15 06:36:05.896011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518574 ] 00:07:18.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.605 [2024-07-15 06:36:05.978830] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 518446 has claimed it. 00:07:18.605 [2024-07-15 06:36:05.982923] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (518574) - No such process 00:07:19.171 ERROR: process (pid: 518574) is no longer running 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 518446 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 518446 00:07:19.171 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.430 lslocks: write error 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 518446 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 518446 ']' 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 518446 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518446 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518446' 00:07:19.430 killing process with pid 518446 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 518446 00:07:19.430 06:36:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 518446 00:07:19.997 00:07:19.997 real 0m1.913s 00:07:19.997 user 0m2.076s 00:07:19.997 sys 0m0.615s 00:07:19.997 06:36:07 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.997 06:36:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.997 ************************************ 00:07:19.997 END TEST locking_app_on_locked_coremask 00:07:19.997 ************************************ 00:07:19.997 06:36:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:19.997 06:36:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:19.997 06:36:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.997 06:36:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.997 ************************************ 00:07:19.997 START TEST locking_overlapped_coremask 00:07:19.997 ************************************ 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=518742 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 518742 /var/tmp/spdk.sock 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 518742 ']' 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.997 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.997 [2024-07-15 06:36:07.407260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:19.997 [2024-07-15 06:36:07.407355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518742 ] 00:07:19.997 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.997 [2024-07-15 06:36:07.466982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.997 [2024-07-15 06:36:07.557509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.997 [2024-07-15 06:36:07.557578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.997 [2024-07-15 06:36:07.557575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=518969 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 518969 /var/tmp/spdk2.sock 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 518969 /var/tmp/spdk2.sock 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 518969 /var/tmp/spdk2.sock 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 518969 ']' 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.255 06:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.255 [2024-07-15 06:36:07.854389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:20.255 [2024-07-15 06:36:07.854486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518969 ] 00:07:20.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.513 [2024-07-15 06:36:07.950098] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 518742 has claimed it. 00:07:20.513 [2024-07-15 06:36:07.950149] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (518969) - No such process 00:07:21.078 ERROR: process (pid: 518969) is no longer running 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 518742 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 518742 ']' 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 518742 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518742 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518742' 00:07:21.078 killing process with pid 518742 00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 518742 
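The error above is the expected outcome: mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both targets contend for core 2 and the second one exits. The per-core lock files held by the survivor can be listed directly (names as printed by check_remaining_locks in the log):

    ls /var/tmp/spdk_cpu_lock_*   # for -m 0x7: spdk_cpu_lock_000, _001 and _002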
00:07:21.078 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 518742 00:07:21.644 00:07:21.644 real 0m1.629s 00:07:21.644 user 0m4.415s 00:07:21.644 sys 0m0.478s 00:07:21.644 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.644 06:36:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.644 ************************************ 00:07:21.644 END TEST locking_overlapped_coremask 00:07:21.644 ************************************ 00:07:21.644 06:36:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:21.644 06:36:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.644 06:36:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.644 06:36:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.644 ************************************ 00:07:21.644 START TEST locking_overlapped_coremask_via_rpc 00:07:21.644 ************************************ 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=519410 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 519410 /var/tmp/spdk.sock 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 519410 ']' 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.644 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.644 [2024-07-15 06:36:09.076122] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:21.644 [2024-07-15 06:36:09.076229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519410 ] 00:07:21.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.644 [2024-07-15 06:36:09.141384] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.644 [2024-07-15 06:36:09.141429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.644 [2024-07-15 06:36:09.234001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.644 [2024-07-15 06:36:09.234062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.644 [2024-07-15 06:36:09.234065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=519548 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 519548 /var/tmp/spdk2.sock 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 519548 ']' 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.902 06:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.159 [2024-07-15 06:36:09.515850] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:22.159 [2024-07-15 06:36:09.515969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519548 ] 00:07:22.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.159 [2024-07-15 06:36:09.606613] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.159 [2024-07-15 06:36:09.606648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.417 [2024-07-15 06:36:09.781539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.417 [2024-07-15 06:36:09.784949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.417 [2024-07-15 06:36:09.784951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 [2024-07-15 06:36:10.479978] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 519410 has claimed it. 
00:07:22.982 request: 00:07:22.982 { 00:07:22.982 "method": "framework_enable_cpumask_locks", 00:07:22.982 "req_id": 1 00:07:22.982 } 00:07:22.982 Got JSON-RPC error response 00:07:22.982 response: 00:07:22.982 { 00:07:22.982 "code": -32603, 00:07:22.982 "message": "Failed to claim CPU core: 2" 00:07:22.982 } 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 519410 /var/tmp/spdk.sock 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 519410 ']' 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.982 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 519548 /var/tmp/spdk2.sock 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 519548 ']' 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
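The JSON-RPC exchange above is the deferred-claim flow: both targets start with --disable-cpumask-locks, then framework_enable_cpumask_locks succeeds on the first and returns -32603 on the second because core 2 is already claimed. The same two calls by hand, assuming an SPDK checkout (rpc.py's -s flag selects the RPC socket, as rpc_cmd does in the log):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: 'Failed to claim CPU core: 2'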
00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.240 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.498 00:07:23.498 real 0m1.963s 00:07:23.498 user 0m1.005s 00:07:23.498 sys 0m0.210s 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.498 06:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.498 ************************************ 00:07:23.498 END TEST locking_overlapped_coremask_via_rpc 00:07:23.498 ************************************ 00:07:23.498 06:36:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.498 06:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 519410 ]] 00:07:23.498 06:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 519410 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 519410 ']' 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 519410 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 519410 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 519410' 00:07:23.498 killing process with pid 519410 00:07:23.498 06:36:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 519410 00:07:23.499 06:36:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 519410 00:07:24.064 06:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 519548 ]] 00:07:24.064 06:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 519548 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 519548 ']' 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 519548 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
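check_remaining_locks above compares a glob of the lock files that actually exist against a brace expansion of the expected set; a standalone sketch of that comparison, with the paths from the log:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a -m 0x7 target should hold
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks match"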
00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 519548 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 519548' 00:07:24.064 killing process with pid 519548 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 519548 00:07:24.064 06:36:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 519548 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 519410 ]] 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 519410 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 519410 ']' 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 519410 00:07:24.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (519410) - No such process 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 519410 is not found' 00:07:24.323 Process with pid 519410 is not found 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 519548 ]] 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 519548 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 519548 ']' 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 519548 00:07:24.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (519548) - No such process 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 519548 is not found' 00:07:24.323 Process with pid 519548 is not found 00:07:24.323 06:36:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.323 00:07:24.323 real 0m15.637s 00:07:24.323 user 0m27.228s 00:07:24.323 sys 0m5.401s 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.323 06:36:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 ************************************ 00:07:24.323 END TEST cpu_locks 00:07:24.323 ************************************ 00:07:24.323 00:07:24.323 real 0m42.134s 00:07:24.323 user 1m20.669s 00:07:24.323 sys 0m9.519s 00:07:24.323 06:36:11 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.323 06:36:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 ************************************ 00:07:24.323 END TEST event 00:07:24.323 ************************************ 00:07:24.323 06:36:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.323 06:36:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.323 06:36:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.323 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 ************************************ 00:07:24.323 START TEST thread 00:07:24.323 ************************************ 00:07:24.323 06:36:11 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.581 * Looking for test storage... 00:07:24.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:24.581 06:36:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.581 06:36:11 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:24.581 06:36:11 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.581 06:36:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.581 ************************************ 00:07:24.581 START TEST thread_poller_perf 00:07:24.581 ************************************ 00:07:24.581 06:36:12 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.581 [2024-07-15 06:36:12.022824] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:24.581 [2024-07-15 06:36:12.022909] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519920 ] 00:07:24.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.581 [2024-07-15 06:36:12.086353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.581 [2024-07-15 06:36:12.176762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.581 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:26.030 ====================================== 00:07:26.030 busy:2713179911 (cyc) 00:07:26.030 total_run_count: 297000 00:07:26.030 tsc_hz: 2700000000 (cyc) 00:07:26.030 ====================================== 00:07:26.030 poller_cost: 9135 (cyc), 3383 (nsec) 00:07:26.030 00:07:26.030 real 0m1.257s 00:07:26.030 user 0m1.171s 00:07:26.030 sys 0m0.080s 00:07:26.030 06:36:13 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.030 06:36:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.030 ************************************ 00:07:26.030 END TEST thread_poller_perf 00:07:26.030 ************************************ 00:07:26.030 06:36:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.030 06:36:13 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:26.030 06:36:13 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.030 06:36:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.030 ************************************ 00:07:26.030 START TEST thread_poller_perf 00:07:26.030 ************************************ 00:07:26.030 06:36:13 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.030 [2024-07-15 06:36:13.331829] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
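How the figures in the results block above are derived, as a sketch in shell integer arithmetic; the same formula yields the 699-cycle / 258-nsec result of the zero-period run below:

    busy=2713179911 runs=297000 tsc_hz=2700000000
    echo $(( busy / runs ))                        # 9135 cycles per poller invocation
    echo $(( busy * 1000000000 / runs / tsc_hz ))  # 3383 nsec at the 2.7 GHz TSC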
00:07:26.030 [2024-07-15 06:36:13.331906] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520078 ] 00:07:26.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.030 [2024-07-15 06:36:13.396118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.030 [2024-07-15 06:36:13.487909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.030 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:26.961 ====================================== 00:07:26.961 busy:2702804220 (cyc) 00:07:26.961 total_run_count: 3864000 00:07:26.961 tsc_hz: 2700000000 (cyc) 00:07:26.961 ====================================== 00:07:26.961 poller_cost: 699 (cyc), 258 (nsec) 00:07:26.961 00:07:26.961 real 0m1.254s 00:07:26.961 user 0m1.162s 00:07:26.961 sys 0m0.086s 00:07:26.961 06:36:14 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.961 06:36:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.961 ************************************ 00:07:26.961 END TEST thread_poller_perf 00:07:26.961 ************************************ 00:07:27.219 06:36:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.219 00:07:27.219 real 0m2.660s 00:07:27.219 user 0m2.394s 00:07:27.219 sys 0m0.265s 00:07:27.219 06:36:14 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.219 06:36:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.219 ************************************ 00:07:27.219 END TEST thread 00:07:27.219 ************************************ 00:07:27.219 06:36:14 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:27.219 06:36:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.219 06:36:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.219 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:07:27.219 ************************************ 00:07:27.219 START TEST accel 00:07:27.219 ************************************ 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:27.219 * Looking for test storage... 
00:07:27.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:27.219 06:36:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:27.219 06:36:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:27.219 06:36:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.219 06:36:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=520290 00:07:27.219 06:36:14 accel -- accel/accel.sh@63 -- # waitforlisten 520290 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@827 -- # '[' -z 520290 ']' 00:07:27.219 06:36:14 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.219 06:36:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.219 06:36:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.219 06:36:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.219 06:36:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.219 06:36:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.219 06:36:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.219 06:36:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.219 06:36:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:27.219 06:36:14 accel -- accel/accel.sh@41 -- # jq -r . 00:07:27.219 [2024-07-15 06:36:14.745513] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:27.219 [2024-07-15 06:36:14.745604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520290 ] 00:07:27.219 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.219 [2024-07-15 06:36:14.805231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.476 [2024-07-15 06:36:14.890685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@860 -- # return 0 00:07:27.734 06:36:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:27.734 06:36:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:27.734 06:36:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:27.734 06:36:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:27.734 06:36:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:27.734 06:36:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:27.734 06:36:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.734 06:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.734 06:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.734 06:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.734 06:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.734 06:36:15 accel -- accel/accel.sh@75 -- # killprocess 520290 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@946 -- # '[' -z 520290 ']' 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@950 -- # kill -0 520290 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@951 -- # uname 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 520290 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.734 06:36:15 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 520290' killing process with pid 520290 06:36:15 accel -- common/autotest_common.sh@965 -- # kill 520290 06:36:15 accel -- common/autotest_common.sh@970 -- # wait 520290 00:07:28.299 06:36:15 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:28.299 06:36:15 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.299 06:36:15 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:28.299 06:36:15 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
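The build_accel_config trace above shows the pattern used throughout this suite: a JSON accel config is assembled in the accel_json_cfg array, joined with IFS=',', piped through jq -r ., and handed to the app on an anonymous descriptor (the '-c /dev/fd/62' in the traced command line). A minimal sketch of that plumbing, assuming a simplified config shape; the helper name comes from the trace, but the body and JSON layout here are illustrative reconstructions, not the verbatim contents of test/accel/accel.sh:

build_accel_config() {
  accel_json_cfg=()                    # per-module JSON fragments would be appended here
  local IFS=,                          # join the fragments with commas
  jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}
# process substitution exposes the generated JSON as /dev/fd/<n>,
# which is how the traced '-c /dev/fd/62' argument comes about:
./build/examples/accel_perf -c <(build_accel_config) -h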
00:07:28.299 06:36:15 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.299 06:36:15 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:28.299 06:36:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.299 06:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.299 ************************************ 00:07:28.299 START TEST accel_missing_filename 00:07:28.299 ************************************ 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.299 06:36:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:28.299 06:36:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:28.299 [2024-07-15 06:36:15.725604] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:28.299 [2024-07-15 06:36:15.725669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520443 ] 00:07:28.299 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.299 [2024-07-15 06:36:15.788167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.299 [2024-07-15 06:36:15.879173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.557 [2024-07-15 06:36:15.938974] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.557 [2024-07-15 06:36:16.014317] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:28.557 A filename is required. 
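The es= trace that follows is autotest_common.sh's NOT wrapper normalizing the failure: accel_perf dies with status 234, the wrapper strips the 128 signal bias (234 -> 106), folds the remainder down to a generic 1, and inverts the result so an expected failure counts as success. A rough reconstruction of that logic, read off the trace (the helper name is real, but this body is a simplified sketch, not the verbatim function):

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=$(( es - 128 ))   # killed by signal N reports 128+N; keep just N
  (( es != 0 )) && es=1                  # fold any remaining failure code down to 1
  (( !es == 0 ))                         # succeed only when the wrapped command failed
}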
00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.557 00:07:28.557 real 0m0.385s 00:07:28.557 user 0m0.276s 00:07:28.557 sys 0m0.144s 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.557 06:36:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:28.557 ************************************ 00:07:28.557 END TEST accel_missing_filename 00:07:28.557 ************************************ 00:07:28.557 06:36:16 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.557 06:36:16 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:28.557 06:36:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.557 06:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.557 ************************************ 00:07:28.557 START TEST accel_compress_verify 00:07:28.557 ************************************ 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.557 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.557 
06:36:16 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:28.557 06:36:16 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:28.557 [2024-07-15 06:36:16.158009] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:28.557 [2024-07-15 06:36:16.158087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520586 ] 00:07:28.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.815 [2024-07-15 06:36:16.220481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.815 [2024-07-15 06:36:16.313256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.815 [2024-07-15 06:36:16.374974] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.073 [2024-07-15 06:36:16.457543] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:29.073 00:07:29.073 Compression does not support the verify option, aborting. 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.073 00:07:29.073 real 0m0.402s 00:07:29.073 user 0m0.283s 00:07:29.073 sys 0m0.151s 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.073 06:36:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.073 ************************************ 00:07:29.073 END TEST accel_compress_verify 00:07:29.073 ************************************ 00:07:29.073 06:36:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:29.073 06:36:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.073 06:36:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.073 06:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.073 ************************************ 00:07:29.073 START TEST accel_wrong_workload 00:07:29.073 ************************************ 00:07:29.073 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:29.073 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:29.073 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:29.073 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:29.073 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:29.074 06:36:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:29.074 Unsupported workload type: foobar 00:07:29.074 [2024-07-15 06:36:16.601750] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:29.074 accel_perf options: 00:07:29.074 [-h help message] 00:07:29.074 [-q queue depth per core] 00:07:29.074 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:29.074 [-T number of threads per core 00:07:29.074 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:29.074 [-t time in seconds] 00:07:29.074 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:29.074 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:29.074 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:29.074 [-l for compress/decompress workloads, name of uncompressed input file 00:07:29.074 [-S for crc32c workload, use this seed value (default 0) 00:07:29.074 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:29.074 [-f for fill workload, use this BYTE value (default 255) 00:07:29.074 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:29.074 [-y verify result if this switch is on] 00:07:29.074 [-a tasks to allocate per core (default: same value as -q)] 00:07:29.074 Can be used to spread operations across a wider range of memory. 
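Read together with the failures above, this option listing maps directly onto the invocations the suite exercises. For instance, both command shapes below appear (with absolute workspace paths) elsewhere in this log; the relative paths here assume you are at the root of an SPDK checkout:

# 1-second software crc32c pass with seed 32 and result verification enabled
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# compress the bundled input file; combining -w compress with -y is rejected
# ("Compression does not support the verify option"), which accel_compress_verify relies on
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y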
00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.074 00:07:29.074 real 0m0.023s 00:07:29.074 user 0m0.014s 00:07:29.074 sys 0m0.009s 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.074 06:36:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:29.074 ************************************ 00:07:29.074 END TEST accel_wrong_workload 00:07:29.074 ************************************ 00:07:29.074 Error: writing output failed: Broken pipe 00:07:29.074 06:36:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:29.074 06:36:16 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:29.074 06:36:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.074 06:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.074 ************************************ 00:07:29.074 START TEST accel_negative_buffers 00:07:29.074 ************************************ 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:29.074 06:36:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:29.074 -x option must be non-negative. 
00:07:29.074 [2024-07-15 06:36:16.664857] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:29.074 accel_perf options: 00:07:29.074 [-h help message] 00:07:29.074 [-q queue depth per core] 00:07:29.074 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:29.074 [-T number of threads per core 00:07:29.074 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:29.074 [-t time in seconds] 00:07:29.074 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:29.074 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:29.074 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:29.074 [-l for compress/decompress workloads, name of uncompressed input file 00:07:29.074 [-S for crc32c workload, use this seed value (default 0) 00:07:29.074 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:29.074 [-f for fill workload, use this BYTE value (default 255) 00:07:29.074 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:29.074 [-y verify result if this switch is on] 00:07:29.074 [-a tasks to allocate per core (default: same value as -q)] 00:07:29.074 Can be used to spread operations across a wider range of memory. 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.074 00:07:29.074 real 0m0.023s 00:07:29.074 user 0m0.014s 00:07:29.074 sys 0m0.009s 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.074 06:36:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:29.074 ************************************ 00:07:29.074 END TEST accel_negative_buffers 00:07:29.074 ************************************ 00:07:29.333 06:36:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:29.333 Error: writing output failed: Broken pipe 00:07:29.333 06:36:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:29.333 06:36:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.333 06:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.333 ************************************ 00:07:29.333 START TEST accel_crc32c 00:07:29.333 ************************************ 00:07:29.333 06:36:16 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:29.333 06:36:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:29.333 06:36:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:29.333 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:29.334 [2024-07-15 06:36:16.725352] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:29.334 [2024-07-15 06:36:16.725415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520655 ] 00:07:29.334 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.334 [2024-07-15 06:36:16.789966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.334 [2024-07-15 06:36:16.883148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.334 06:36:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.592 06:36:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:30.527 06:36:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.527 00:07:30.527 real 0m1.410s 00:07:30.527 user 0m1.263s 00:07:30.527 sys 0m0.149s 00:07:30.527 06:36:18 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.527 06:36:18 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:30.527 ************************************ 00:07:30.527 END TEST accel_crc32c 00:07:30.527 ************************************ 00:07:30.527 06:36:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:30.527 06:36:18 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:30.527 06:36:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.786 06:36:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.786 ************************************ 00:07:30.786 START TEST accel_crc32c_C2 00:07:30.786 ************************************ 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:30.786 06:36:18 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.786 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:30.787 [2024-07-15 06:36:18.177023] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:30.787 [2024-07-15 06:36:18.177091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520877 ] 00:07:30.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.787 [2024-07-15 06:36:18.240200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.787 [2024-07-15 06:36:18.331064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.787 06:36:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 
06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.161 00:07:32.161 real 0m1.388s 00:07:32.161 user 0m1.241s 00:07:32.161 sys 0m0.149s 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.161 06:36:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:32.161 ************************************ 00:07:32.161 END TEST accel_crc32c_C2 00:07:32.161 ************************************ 00:07:32.161 06:36:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:32.161 06:36:19 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:32.161 06:36:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.161 06:36:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.161 ************************************ 00:07:32.161 START TEST accel_copy 00:07:32.162 ************************************ 00:07:32.162 06:36:19 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:32.162 06:36:19 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:32.162 06:36:19 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:32.162 [2024-07-15 06:36:19.607369] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:32.162 [2024-07-15 06:36:19.607422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521080 ] 00:07:32.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.162 [2024-07-15 06:36:19.668462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.162 [2024-07-15 06:36:19.761157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.421 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.422 06:36:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:33.387 06:36:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.387 00:07:33.387 real 0m1.408s 00:07:33.387 user 0m1.263s 00:07:33.387 sys 0m0.146s 00:07:33.387 06:36:20 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.646 06:36:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 ************************************ 00:07:33.646 END TEST accel_copy 00:07:33.646 ************************************ 00:07:33.646 06:36:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.646 06:36:21 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:33.646 06:36:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.646 06:36:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 ************************************ 00:07:33.646 START TEST accel_fill 00:07:33.646 ************************************ 00:07:33.646 06:36:21 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.646 06:36:21 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:33.646 06:36:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:33.646 [2024-07-15 06:36:21.061299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:33.646 [2024-07-15 06:36:21.061364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521240 ] 00:07:33.646 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.646 [2024-07-15 06:36:21.123205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.646 [2024-07-15 06:36:21.215182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.905 06:36:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.838 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:34.839 06:36:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.839 00:07:34.839 real 0m1.399s 00:07:34.839 user 0m1.251s 00:07:34.839 sys 0m0.150s 00:07:34.839 06:36:22 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.839 06:36:22 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:34.839 ************************************ 00:07:34.839 END TEST accel_fill 00:07:34.839 ************************************ 00:07:35.097 06:36:22 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:35.097 06:36:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:35.097 06:36:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.097 06:36:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.097 ************************************ 00:07:35.097 START TEST accel_copy_crc32c 00:07:35.097 ************************************ 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:35.097 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
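[Note] The copy_crc32c case starting here drives the same accel_perf example binary as the preceding tests. A minimal standalone reproduction, assuming a local SPDK build under $SPDK_ROOT and using only flags visible in this trace (-t run time in seconds, -w workload, -y verify results; meanings inferred from the surrounding output rather than documented here):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # software copy+crc32c workload, 1 second, with verification
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y

The logged command additionally passes -c /dev/fd/62, which appears to feed the JSON accel config built by build_accel_config through an inherited descriptor (the jq -r . step above is what validates that JSON). Omitting it, as in this sketch, leaves the default software module, consistent with the [[ -n software ]] checks throughout this log.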
00:07:35.097 [2024-07-15 06:36:22.512858] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:35.097 [2024-07-15 06:36:22.512941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521398 ] 00:07:35.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.097 [2024-07-15 06:36:22.576793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.097 [2024-07-15 06:36:22.667398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.356 06:36:22 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 06:36:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.286 00:07:36.286 real 0m1.400s 00:07:36.286 user 0m1.258s 00:07:36.286 sys 0m0.145s 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.286 06:36:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:36.286 ************************************ 00:07:36.286 END TEST accel_copy_crc32c 00:07:36.286 ************************************ 00:07:36.543 06:36:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:36.543 06:36:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:36.543 06:36:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.543 06:36:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.543 ************************************ 00:07:36.543 START TEST accel_copy_crc32c_C2 00:07:36.543 ************************************ 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.543 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.544 06:36:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:36.544 [2024-07-15 06:36:23.958981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:36.544 [2024-07-15 06:36:23.959043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521674 ] 00:07:36.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.544 [2024-07-15 06:36:24.016534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.544 [2024-07-15 06:36:24.109548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:36.800 06:36:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.800 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.801 06:36:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.732 00:07:37.732 real 0m1.403s 00:07:37.732 user 0m1.257s 00:07:37.732 sys 0m0.148s 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.732 06:36:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:07:37.732 ************************************ 00:07:37.732 END TEST accel_copy_crc32c_C2 00:07:37.732 ************************************ 00:07:37.991 06:36:25 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:37.991 06:36:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:37.991 06:36:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.991 06:36:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.991 ************************************ 00:07:37.991 START TEST accel_dualcast 00:07:37.991 ************************************ 00:07:37.991 06:36:25 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:37.991 06:36:25 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:37.991 [2024-07-15 06:36:25.401907] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
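[Note] Each START TEST / END TEST banner pair in this log, together with the real/user/sys lines between them, comes from a run_test wrapper timing the test function with bash's time keyword. A rough sketch of such a wrapper, assumed rather than copied from SPDK's common/autotest_common.sh:

    run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"   # emits the real/user/sys lines seen throughout this log
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
    }

The dualcast test underway here would then correspond to run_test_sketch accel_dualcast accel_test -t 1 -w dualcast -y, mirroring the run_test call recorded just above.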
00:07:37.991 [2024-07-15 06:36:25.401968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521828 ] 00:07:37.991 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.991 [2024-07-15 06:36:25.464760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.991 [2024-07-15 06:36:25.557752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.249 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 
06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.250 06:36:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:39.184 06:36:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.184 00:07:39.184 real 0m1.408s 00:07:39.184 user 0m1.263s 00:07:39.184 sys 0m0.146s 00:07:39.184 06:36:26 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.184 06:36:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:39.184 ************************************ 00:07:39.184 END TEST accel_dualcast 00:07:39.184 ************************************ 00:07:39.443 06:36:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:39.443 06:36:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:39.443 06:36:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.443 06:36:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.443 ************************************ 00:07:39.443 START TEST accel_compare 00:07:39.443 ************************************ 00:07:39.443 06:36:26 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:39.443 06:36:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:39.443 [2024-07-15 06:36:26.855840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
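[Note] Most of the volume in this trace is one small loop in accel/accel.sh: with IFS=: it reads key/value pairs (read -r var val) and dispatches on the key (case "$var" in), capturing accel_opc and accel_module along the way. A hedged reconstruction of that shape — the key patterns and input source below are assumptions; only the IFS=: / read / case skeleton is taken from the trace:

    while IFS=: read -r var val; do
      case "$var" in
        *opcode*) accel_opc=$val ;;     # 'compare' in this block
        *module*) accel_module=$val ;;  # 'software' throughout this log
      esac
    done < accel_perf_output.txt        # placeholder input, not the real source

The [[ -n software ]] / [[ -n compare ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks at the end of each test then assert that both values were actually captured.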
00:07:39.443 [2024-07-15 06:36:26.855912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521986 ] 00:07:39.443 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.443 [2024-07-15 06:36:26.918404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.443 [2024-07-15 06:36:27.010044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.701 06:36:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:40.635 06:36:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.635 00:07:40.635 real 0m1.405s 00:07:40.635 user 0m1.256s 00:07:40.635 sys 0m0.150s 00:07:40.635 06:36:28 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.635 06:36:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:40.635 ************************************ 00:07:40.635 END TEST accel_compare 00:07:40.635 ************************************ 00:07:40.893 06:36:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:40.893 06:36:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:40.893 06:36:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.893 06:36:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 ************************************ 00:07:40.893 START TEST accel_xor 00:07:40.893 ************************************ 00:07:40.893 06:36:28 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:40.893 06:36:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:40.893 [2024-07-15 06:36:28.299009] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
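[Note] The two xor tests in this log differ only in source-buffer count: this one XORs the default two sources (note val=2 in the trace below), while the next passes -x 3 (and traces val=3). Assuming -x sets the number of xor sources — an inference from those val lines rather than from accel_perf documentation:

    # default: two source buffers
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y
    # three source buffers
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3

$SPDK_ROOT abbreviates the workspace path used throughout this log.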
00:07:40.893 [2024-07-15 06:36:28.299062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522144 ] 00:07:40.893 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.893 [2024-07-15 06:36:28.360069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.893 [2024-07-15 06:36:28.451790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 06:36:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 
06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:42.084 06:36:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.084 00:07:42.084 real 0m1.387s 00:07:42.084 user 0m1.247s 00:07:42.084 sys 0m0.141s 00:07:42.084 06:36:29 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.084 06:36:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:42.084 ************************************ 00:07:42.084 END TEST accel_xor 00:07:42.084 ************************************ 00:07:42.084 06:36:29 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:42.084 06:36:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:42.084 06:36:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.084 06:36:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.343 ************************************ 00:07:42.343 START TEST accel_xor 00:07:42.343 ************************************ 00:07:42.343 06:36:29 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:42.343 [2024-07-15 06:36:29.731422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
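The records above are bash xtrace output from test/accel/accel.sh: each "val=" line is the harness reading back one expected parameter of the accel_perf run (workload, buffer size, module, run time), and the END/START markers bracket individual run_test cases. Judging from the command line captured in the trace (accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3), the two xor cases could be reproduced standalone roughly as below. This is a sketch that assumes a built SPDK tree at the workspace path the log uses and drops the -c /dev/fd/62 JSON-config descriptor the harness supplies; the first run's "val=2" record suggests it used -x 2.

    # Sketch: the two xor cases seen in this log (flag meanings inferred from the
    # trace: -t run time in seconds, -w workload, -x number of xor source buffers,
    # -y verify the result)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 2
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3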
00:07:42.343 [2024-07-15 06:36:29.731491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522414 ] 00:07:42.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.343 [2024-07-15 06:36:29.795213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.343 [2024-07-15 06:36:29.888595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.343 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.344 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.602 06:36:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 
06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:43.535 06:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.535 00:07:43.535 real 0m1.413s 00:07:43.535 user 0m1.254s 00:07:43.535 sys 0m0.161s 00:07:43.535 06:36:31 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.535 06:36:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:43.535 ************************************ 00:07:43.535 END TEST accel_xor 00:07:43.535 ************************************ 00:07:43.793 06:36:31 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:43.793 06:36:31 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:43.793 06:36:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.793 06:36:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.793 ************************************ 00:07:43.793 START TEST accel_dif_verify 00:07:43.793 ************************************ 00:07:43.793 06:36:31 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:43.793 06:36:31 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:43.793 [2024-07-15 06:36:31.188126] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
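The dif_verify case starting here runs the same harness against the DIF-verify opcode: accel_perf checks T10 protection information on each block instead of xoring buffers. A minimal standalone sketch of the invocation recorded in the trace, under the same assumptions as the xor sketch above:

    # Sketch: dif_verify over the software module, as recorded above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify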
00:07:43.793 [2024-07-15 06:36:31.188198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522574 ] 00:07:43.793 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.793 [2024-07-15 06:36:31.252427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.793 [2024-07-15 06:36:31.345630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 
06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.051 06:36:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 
06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:44.984 06:36:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.984 00:07:44.984 real 0m1.413s 00:07:44.984 user 0m1.269s 00:07:44.984 sys 0m0.148s 00:07:44.984 06:36:32 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.984 06:36:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:44.984 ************************************ 00:07:44.984 END TEST accel_dif_verify 00:07:44.985 ************************************ 00:07:45.243 06:36:32 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:45.243 06:36:32 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:45.243 06:36:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.243 06:36:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 ************************************ 00:07:45.243 START TEST accel_dif_generate 00:07:45.243 ************************************ 00:07:45.243 06:36:32 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
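The dif_generate case launched here is the write-side counterpart of dif_verify: rather than checking existing protection information, accel_perf generates the DIF tags for each block. A sketch of the recorded invocation, same assumptions as above:

    # Sketch: dif_generate over the software module
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate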
00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:45.243 06:36:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:45.243 [2024-07-15 06:36:32.641740] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:45.243 [2024-07-15 06:36:32.641792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522727 ] 00:07:45.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.243 [2024-07-15 06:36:32.704712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.243 [2024-07-15 06:36:32.797742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
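The three size records just above pin down the DIF geometry for this run: 4096-byte buffers, a 512-byte block size, and 8 bytes of protection information per block (the standard T10 DIF tuple: a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag). That works out to 8 protected blocks and 64 bytes of generated metadata per buffer:

    # DIF geometry implied by the val records above
    echo $(( 4096 / 512 ))        # 8 blocks per buffer
    echo $(( 4096 / 512 * 8 ))    # 64 bytes of protection info per buffer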
00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.501 06:36:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:46.433 06:36:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.433 00:07:46.433 real 0m1.405s 00:07:46.433 user 0m1.266s 00:07:46.433 sys 
0m0.143s 00:07:46.433 06:36:34 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.433 06:36:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:46.433 ************************************ 00:07:46.433 END TEST accel_dif_generate 00:07:46.433 ************************************ 00:07:46.750 06:36:34 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:46.750 06:36:34 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:46.750 06:36:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.750 06:36:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.750 ************************************ 00:07:46.750 START TEST accel_dif_generate_copy 00:07:46.750 ************************************ 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:46.750 [2024-07-15 06:36:34.092276] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:46.750 [2024-07-15 06:36:34.092339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522908 ] 00:07:46.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.750 [2024-07-15 06:36:34.155335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.750 [2024-07-15 06:36:34.246563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.750 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.751 06:36:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
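The repetitive case/IFS/read records threaded through this whole section are the harness's parsing loop: accel.sh splits each line of accel_perf's summary on ':' and keeps the fields it later asserts on (the "[[ -n software ]]" and "[[ -n xor ]]" checks at each test's end). A minimal sketch of that idiom follows; the key names here ("opc", "module") are stand-ins, since the log only shows the post-expansion checks, not accel_perf's literal output:

    # Sketch of the IFS=:/read/case parsing loop visible in the xtrace
    while IFS=: read -r var val; do
        val=${val# }                        # trim the space left after the colon
        case "$var" in
            *opc*)    accel_opc=$val ;;     # e.g. "dif_generate_copy" (key name assumed)
            *module*) accel_module=$val ;;  # e.g. "software" (key name assumed)
        esac
    done < <(printf '%s\n' 'opc: dif_generate_copy' 'module: software')
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && echo "$accel_opc ran on $accel_module"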
00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.130 00:07:48.130 real 0m1.390s 00:07:48.130 user 0m1.251s 00:07:48.130 sys 0m0.141s 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.130 06:36:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.130 ************************************ 00:07:48.130 END TEST accel_dif_generate_copy 00:07:48.130 ************************************ 00:07:48.130 06:36:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:48.130 06:36:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.130 06:36:35 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:48.130 06:36:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.130 06:36:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.130 ************************************ 00:07:48.130 START TEST accel_comp 00:07:48.130 ************************************ 00:07:48.130 06:36:35 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:48.130 06:36:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:48.130 [2024-07-15 06:36:35.526713] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:48.130 [2024-07-15 06:36:35.526779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523162 ] 00:07:48.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.130 [2024-07-15 06:36:35.589198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.130 [2024-07-15 06:36:35.682343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.387 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 
06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.388 06:36:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:49.322 06:36:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.322 00:07:49.322 real 0m1.415s 00:07:49.322 user 0m1.266s 00:07:49.322 sys 0m0.152s 00:07:49.322 06:36:36 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.322 06:36:36 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:49.322 ************************************ 00:07:49.322 END TEST accel_comp 00:07:49.322 ************************************ 00:07:49.580 06:36:36 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:49.580 06:36:36 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:49.580 06:36:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.580 06:36:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.580 ************************************ 00:07:49.580 START TEST accel_decomp 00:07:49.580 ************************************ 00:07:49.580 06:36:36 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:49.580 06:36:36 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:49.581 06:36:36 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:49.581 [2024-07-15 06:36:36.991942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:49.581 [2024-07-15 06:36:36.992015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523321 ] 00:07:49.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.581 [2024-07-15 06:36:37.054994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.581 [2024-07-15 06:36:37.146835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.838 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 06:36:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.773 06:36:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.773 00:07:50.773 real 0m1.406s 00:07:50.773 user 0m1.267s 00:07:50.773 sys 0m0.142s 00:07:50.773 06:36:38 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.773 06:36:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:50.773 ************************************ 00:07:50.773 END TEST accel_decomp 00:07:50.773 ************************************ 00:07:51.032 
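The accel_decomp case that just finished can be re-run outside the harness with the command recorded at accel.sh@12. The traced accel_json_cfg array stayed empty (every [[ 0 -gt 0 ]] guard failed), so dropping the -c /dev/fd/62 config descriptor should still exercise the same software decompress path; treat that simplification as an assumption.

    # Hedged reproduction of the accel_decomp run (software module, 4096-byte chunks)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y

Here -t is the run time in seconds, -w the workload, -l the input file, and -y presumably the verify switch; the run above finished in real 0m1.406s on this host.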
06:36:38 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:51.032 06:36:38 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:51.032 06:36:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.032 06:36:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.032 ************************************ 00:07:51.032 START TEST accel_decmop_full 00:07:51.032 ************************************ 00:07:51.032 06:36:38 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:51.032 06:36:38 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:51.032 [2024-07-15 06:36:38.438816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
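build_accel_config, traced at sh@31 through sh@41 just above, collects JSON fragments in accel_json_cfg, joins them with IFS=, (sh@40), pretty-prints the result with jq -r . (sh@41), and hands it to accel_perf as /dev/fd/62. A rough sketch of that assembly, with a placeholder fragment since this software-only run configured nothing; the envelope keys follow the usual SPDK JSON-config shape but are an assumption here.

    # Sketch of the build_accel_config pattern (the fragment is a placeholder)
    accel_json_cfg=('{"method":"example_placeholder"}')
    IFS=,    # join fragments with commas, as accel.sh@40 does
    config="{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
    jq -r . <<< "$config"    # roughly what accel_perf reads via -c /dev/fd/62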
00:07:51.032 [2024-07-15 06:36:38.438993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523476 ] 00:07:51.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.032 [2024-07-15 06:36:38.502160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.032 [2024-07-15 06:36:38.593837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.291 06:36:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.225 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.226 06:36:39 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.226 00:07:52.226 real 0m1.415s 00:07:52.226 user 0m1.269s 00:07:52.226 sys 0m0.148s 00:07:52.226 06:36:39 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.226 06:36:39 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:52.226 ************************************ 00:07:52.226 END TEST accel_decmop_full 00:07:52.226 ************************************ 00:07:52.484 06:36:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:52.484 06:36:39 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:52.484 06:36:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.484 06:36:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.484 ************************************ 00:07:52.484 START TEST accel_decomp_mcore 00:07:52.484 ************************************ 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:52.484 06:36:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:52.484 [2024-07-15 06:36:39.897385] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:52.484 [2024-07-15 06:36:39.897447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523741 ] 00:07:52.484 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.484 [2024-07-15 06:36:39.962629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.484 [2024-07-15 06:36:40.057495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.484 [2024-07-15 06:36:40.057565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.484 [2024-07-15 06:36:40.057655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.484 [2024-07-15 06:36:40.057658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:52.742 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.743 06:36:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.676 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.677 00:07:53.677 real 0m1.398s 00:07:53.677 user 0m4.651s 00:07:53.677 sys 0m0.149s 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.677 06:36:41 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:53.677 ************************************ 00:07:53.677 END TEST accel_decomp_mcore 00:07:53.677 ************************************ 00:07:53.936 06:36:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.936 06:36:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:53.936 06:36:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.936 06:36:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.936 ************************************ 00:07:53.936 START TEST accel_decomp_full_mcore 00:07:53.936 ************************************ 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore 
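accel_decomp_mcore, which ended just above, adds -m 0xf: the four reactor notices confirm cores 0 through 3 all came up, and user time (0m4.651s) accordingly lands at roughly four times the 1-second wall-clock budget. The full_* variants add -o 0 on top, switching to the whole-file 111250-byte buffers visible in their traces. A hedged reproduction of the multi-core case, with the -c descriptor dropped under the same assumption as before:

    # Hedged reproduction of accel_decomp_mcore (4 cores via -m 0xf)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf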
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:53.936 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:53.936 [2024-07-15 06:36:41.350085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:53.936 [2024-07-15 06:36:41.350149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523910 ] 00:07:53.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.936 [2024-07-15 06:36:41.412849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.936 [2024-07-15 06:36:41.507739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.936 [2024-07-15 06:36:41.507796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.936 [2024-07-15 06:36:41.507914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.936 [2024-07-15 06:36:41.507917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:54.194 06:36:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.194 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.195 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.195 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.195 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.195 06:36:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.569 00:07:55.569 real 0m1.432s 00:07:55.569 user 0m4.775s 00:07:55.569 sys 0m0.154s 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.569 06:36:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:55.569 ************************************ 00:07:55.569 END TEST accel_decomp_full_mcore 00:07:55.569 ************************************ 00:07:55.569 06:36:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.569 06:36:42 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:55.569 06:36:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.569 06:36:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.569 ************************************ 00:07:55.569 START TEST accel_decomp_mthread 00:07:55.569 ************************************ 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:55.569 06:36:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:55.569 [2024-07-15 06:36:42.825455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:55.569 [2024-07-15 06:36:42.825523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524064 ] 00:07:55.569 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.569 [2024-07-15 06:36:42.887737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.569 [2024-07-15 06:36:42.981135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.569 06:36:43 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.569 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.570 06:36:43 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.940 00:07:56.940 real 0m1.403s 00:07:56.940 user 0m1.258s 00:07:56.940 sys 0m0.148s 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.940 06:36:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:56.940 ************************************ 00:07:56.940 END TEST accel_decomp_mthread 00:07:56.940 ************************************ 00:07:56.940 06:36:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.940 06:36:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:56.940 06:36:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.940 06:36:44 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.940 ************************************ 00:07:56.940 START TEST accel_decomp_full_mthread 00:07:56.940 ************************************ 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:56.940 [2024-07-15 06:36:44.276574] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
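The accel_perf command line above is the whole benchmark. Reading the flags off the trace: -t 1 runs for one second (matching the '1 seconds' value the config loop reports), -w decompress selects the workload, -l points at the compressed input file test/accel/bib, -y verifies the decompressed output, and -T 2 runs two worker threads, which is what the *_mthread test names refer to. -o 0 appears to make the transfer size follow the input file: the trace for this run records '111250 bytes', where the preceding non-full run used 4096-byte blocks; that reading is inferred from the trace, not from documentation. A minimal sketch of invoking the same benchmark by hand from a built SPDK tree (relative paths are illustrative, and the harness additionally passes its JSON config via -c /dev/fd/62):

  # 1-second software-path decompression, verified output, 2 worker threads
  ./build/examples/accel_perf -t 1 -w decompress \
      -l ./test/accel/bib -y -o 0 -T 2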
00:07:56.940 [2024-07-15 06:36:44.276640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524228 ] 00:07:56.940 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.940 [2024-07-15 06:36:44.343104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.940 [2024-07-15 06:36:44.434831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.940 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
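Each repeated IFS=: / read -r var val / case "$var" in triple in this trace is one iteration of a shell loop in accel.sh that walks the test configuration as colon-separated key/value pairs, latching the ones it needs (accel_opc=decompress and accel_module=software are both visible in the trace above). A minimal sketch of that shell pattern, with illustrative key names, not the harness itself:

  # read colon-separated key/value pairs and dispatch on the key
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;     # e.g. decompress
          module) accel_module=$val ;;  # e.g. software
      esac
  done < config_stream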
00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.941 06:36:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.315 00:07:58.315 real 0m1.436s 00:07:58.315 user 0m1.287s 00:07:58.315 sys 0m0.152s 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.315 06:36:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:58.315 ************************************ 00:07:58.315 END TEST accel_decomp_full_mthread 00:07:58.315 
************************************ 00:07:58.315 06:36:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:58.315 06:36:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:58.315 06:36:45 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:58.315 06:36:45 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.315 06:36:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.315 06:36:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.315 06:36:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.315 06:36:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.315 06:36:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.315 06:36:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.315 06:36:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.315 06:36:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:58.315 06:36:45 accel -- accel/accel.sh@41 -- # jq -r . 00:07:58.315 ************************************ 00:07:58.315 START TEST accel_dif_functional_tests 00:07:58.315 ************************************ 00:07:58.315 06:36:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:58.315 [2024-07-15 06:36:45.780469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:58.316 [2024-07-15 06:36:45.780528] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524499 ] 00:07:58.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.316 [2024-07-15 06:36:45.837393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.574 [2024-07-15 06:36:45.933777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.574 [2024-07-15 06:36:45.933848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.574 [2024-07-15 06:36:45.933850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.574 00:07:58.574 00:07:58.574 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.574 http://cunit.sourceforge.net/ 00:07:58.574 00:07:58.574 00:07:58.574 Suite: accel_dif 00:07:58.574 Test: verify: DIF generated, GUARD check ...passed 00:07:58.574 Test: verify: DIF generated, APPTAG check ...passed 00:07:58.574 Test: verify: DIF generated, REFTAG check ...passed 00:07:58.574 Test: verify: DIF not generated, GUARD check ...[2024-07-15 06:36:46.026234] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.574 passed 00:07:58.574 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 06:36:46.026304] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.574 passed 00:07:58.574 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 06:36:46.026335] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.574 passed 00:07:58.574 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:58.574 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 06:36:46.026392] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:58.574 passed 00:07:58.574 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:58.574 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:58.574 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:58.574 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 06:36:46.026515] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:58.574 passed 00:07:58.574 Test: verify copy: DIF generated, GUARD check ...passed 00:07:58.574 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:58.574 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:58.574 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 06:36:46.026672] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.574 passed 00:07:58.574 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 06:36:46.026708] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.574 passed 00:07:58.574 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 06:36:46.026741] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.574 passed 00:07:58.574 Test: generate copy: DIF generated, GUARD check ...passed 00:07:58.574 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:58.574 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:58.574 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:58.574 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:58.574 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:58.574 Test: generate copy: iovecs-len validate ...[2024-07-15 06:36:46.026981] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:58.574 passed 00:07:58.574 Test: generate copy: buffer alignment validate ...passed 00:07:58.574 00:07:58.574 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.574 suites 1 1 n/a 0 0 00:07:58.574 tests 26 26 26 0 0 00:07:58.574 asserts 115 115 115 0 n/a 00:07:58.574 00:07:58.574 Elapsed time = 0.003 seconds 00:07:58.833 00:07:58.833 real 0m0.503s 00:07:58.833 user 0m0.778s 00:07:58.833 sys 0m0.185s 00:07:58.833 06:36:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.833 06:36:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:58.833 ************************************ 00:07:58.833 END TEST accel_dif_functional_tests 00:07:58.833 ************************************ 00:07:58.833 00:07:58.833 real 0m31.626s 00:07:58.833 user 0m34.952s 00:07:58.833 sys 0m4.643s 00:07:58.833 06:36:46 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.833 06:36:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.833 ************************************ 00:07:58.833 END TEST accel 00:07:58.833 ************************************ 00:07:58.833 06:36:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:58.833 06:36:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.833 06:36:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.833 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:07:58.833 ************************************ 00:07:58.833 START TEST accel_rpc 00:07:58.833 ************************************ 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:58.833 * Looking for test storage... 00:07:58.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:58.833 06:36:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.833 06:36:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=524571 00:07:58.833 06:36:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.833 06:36:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 524571 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 524571 ']' 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.833 06:36:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.833 [2024-07-15 06:36:46.423259] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
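The dif.c *ERROR* lines in the suite that finished above are the point of the exercise: each "not generated" case hands the verifier deliberately mismatched protection information, and the test passes precisely because the mismatch is detected. All three fields of a DIF tuple are visible in the messages: the Guard (a CRC over the block data; Expected=5a5a vs Actual=7867), the Application Tag (Expected=14 vs Actual=5a5a), and the Reference Tag (conventionally tied to the LBA; Expected=a vs Actual=5a5a5a5a). The iovecs-len case likewise passes by provoking the bounce_iovs size check in spdk_dif_generate_copy. The test binary is driven exactly as the trace shows, with the generated accel JSON config supplied on a file descriptor:

  # as invoked by the harness; the caller must populate fd 62 with the JSON config
  ./test/accel/dif/dif -c /dev/fd/62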
00:07:58.833 [2024-07-15 06:36:46.423358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524571 ] 00:07:59.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.091 [2024-07-15 06:36:46.482893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.091 [2024-07-15 06:36:46.567071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.091 06:36:46 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.091 06:36:46 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:59.091 06:36:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:59.091 06:36:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:59.091 06:36:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:59.091 06:36:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:59.091 06:36:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:59.091 06:36:46 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.091 06:36:46 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.091 06:36:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.091 ************************************ 00:07:59.091 START TEST accel_assign_opcode 00:07:59.091 ************************************ 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.091 [2024-07-15 06:36:46.643699] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.091 [2024-07-15 06:36:46.651711] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.091 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.350 06:36:46 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.350 software 00:07:59.350 00:07:59.350 real 0m0.292s 00:07:59.350 user 0m0.040s 00:07:59.350 sys 0m0.007s 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.350 06:36:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.350 ************************************ 00:07:59.350 END TEST accel_assign_opcode 00:07:59.350 ************************************ 00:07:59.350 06:36:46 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 524571 00:07:59.350 06:36:46 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 524571 ']' 00:07:59.350 06:36:46 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 524571 00:07:59.350 06:36:46 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:59.350 06:36:46 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.350 06:36:46 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524571 00:07:59.608 06:36:46 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.608 06:36:46 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.608 06:36:46 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524571' 00:07:59.608 killing process with pid 524571 00:07:59.608 06:36:46 accel_rpc -- common/autotest_common.sh@965 -- # kill 524571 00:07:59.608 06:36:46 accel_rpc -- common/autotest_common.sh@970 -- # wait 524571 00:07:59.867 00:07:59.867 real 0m1.070s 00:07:59.867 user 0m0.988s 00:07:59.867 sys 0m0.426s 00:07:59.867 06:36:47 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.867 06:36:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.867 ************************************ 00:07:59.867 END TEST accel_rpc 00:07:59.867 ************************************ 00:07:59.867 06:36:47 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:59.867 06:36:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.867 06:36:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.867 06:36:47 -- common/autotest_common.sh@10 -- # set +x 00:07:59.867 ************************************ 00:07:59.867 START TEST app_cmdline 00:07:59.867 ************************************ 00:07:59.867 06:36:47 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:00.126 * Looking for test storage... 
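The accel_rpc test that just completed is the standard two-phase SPDK startup: spdk_tgt is launched with --wait-for-rpc so framework initialization is deferred, opcode-to-module assignments are made over JSON-RPC (an assignment to the bogus module "incorrect" is accepted at this stage and simply overridden by the later assignment to software, as the two NOTICE lines above show), then framework_start_init runs and accel_get_opc_assignments confirms where the copy opcode landed. A sketch of reproducing it by hand from a built tree (wait for /var/tmp/spdk.sock before issuing the first RPC, as the harness's waitforlisten does):

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software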
00:08:00.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.126 06:36:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:00.126 06:36:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=524775 00:08:00.126 06:36:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:00.126 06:36:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 524775 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 524775 ']' 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.126 06:36:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.126 [2024-07-15 06:36:47.546568] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:00.126 [2024-07-15 06:36:47.546669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524775 ] 00:08:00.126 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.126 [2024-07-15 06:36:47.609345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.126 [2024-07-15 06:36:47.694869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.383 06:36:47 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.383 06:36:47 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:00.383 06:36:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:00.641 { 00:08:00.641 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:00.641 "fields": { 00:08:00.641 "major": 24, 00:08:00.641 "minor": 5, 00:08:00.641 "patch": 1, 00:08:00.641 "suffix": "-pre", 00:08:00.641 "commit": "5fa2f5086" 00:08:00.641 } 00:08:00.641 } 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.641 06:36:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:00.641 06:36:48 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.898 request: 00:08:00.898 { 00:08:00.898 "method": "env_dpdk_get_mem_stats", 00:08:00.898 "req_id": 1 00:08:00.898 } 00:08:00.898 Got JSON-RPC error response 00:08:00.898 response: 00:08:00.898 { 00:08:00.898 "code": -32601, 00:08:00.898 "message": "Method not found" 00:08:00.898 } 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.898 06:36:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 524775 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 524775 ']' 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 524775 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524775 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524775' 00:08:00.898 killing process with pid 524775 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@965 -- # kill 524775 00:08:00.898 06:36:48 app_cmdline -- common/autotest_common.sh@970 -- # wait 524775 00:08:01.597 00:08:01.597 real 0m1.473s 00:08:01.597 user 0m1.791s 00:08:01.597 sys 0m0.462s 00:08:01.597 06:36:48 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.597 06:36:48 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.597 ************************************ 00:08:01.597 END TEST app_cmdline 00:08:01.597 ************************************ 00:08:01.597 06:36:48 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:01.597 06:36:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.597 06:36:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.597 06:36:48 -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 ************************************ 00:08:01.597 START TEST version 00:08:01.597 ************************************ 00:08:01.597 06:36:48 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:01.597 * Looking for test storage... 00:08:01.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:01.597 06:36:49 version -- app/version.sh@17 -- # get_header_version major 00:08:01.597 06:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.597 06:36:49 version -- app/version.sh@17 -- # major=24 00:08:01.597 06:36:49 version -- app/version.sh@18 -- # get_header_version minor 00:08:01.597 06:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.597 06:36:49 version -- app/version.sh@18 -- # minor=5 00:08:01.597 06:36:49 version -- app/version.sh@19 -- # get_header_version patch 00:08:01.597 06:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.597 06:36:49 version -- app/version.sh@19 -- # patch=1 00:08:01.597 06:36:49 version -- app/version.sh@20 -- # get_header_version suffix 00:08:01.597 06:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.597 06:36:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.597 06:36:49 version -- app/version.sh@20 -- # suffix=-pre 00:08:01.597 06:36:49 version -- app/version.sh@22 -- # version=24.5 00:08:01.597 06:36:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.597 06:36:49 version -- app/version.sh@25 -- # version=24.5.1 00:08:01.597 06:36:49 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:01.597 06:36:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:01.597 06:36:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.597 06:36:49 
version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:01.597 06:36:49 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:01.597 00:08:01.597 real 0m0.106s 00:08:01.597 user 0m0.050s 00:08:01.597 sys 0m0.077s 00:08:01.597 06:36:49 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.597 06:36:49 version -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 ************************************ 00:08:01.597 END TEST version 00:08:01.597 ************************************ 00:08:01.597 06:36:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@198 -- # uname -s 00:08:01.597 06:36:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:01.597 06:36:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:01.597 06:36:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:01.597 06:36:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:01.597 06:36:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.597 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 06:36:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:01.597 06:36:49 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:01.597 06:36:49 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.597 06:36:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:01.597 06:36:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.597 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 ************************************ 00:08:01.597 START TEST nvmf_tcp 00:08:01.597 ************************************ 00:08:01.597 06:36:49 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.597 * Looking for test storage... 00:08:01.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:01.597 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.856 06:36:49 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.856 06:36:49 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.856 06:36:49 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.856 06:36:49 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:01.856 06:36:49 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:01.856 06:36:49 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.856 06:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:01.856 06:36:49 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.856 06:36:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:01.856 06:36:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.856 06:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.856 ************************************ 00:08:01.856 START TEST nvmf_example 00:08:01.856 ************************************ 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.856 * Looking for test storage... 
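From here on, every nvmf test begins by sourcing test/nvmf/common.sh, which is why the same block of environment setup repeats: listener ports 4420/4421/4422, the 192.168.100 IP prefix for the test network, NET_TYPE=phy (real NICs rather than virtual interfaces), and a host NQN generated fresh for the run. The NQN generation is plain nvme-cli and can be reproduced directly (the uuid suffix will differ per machine):

  NVME_HOSTNQN=$(nvme gen-hostnqn)
  echo "$NVME_HOSTNQN"   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55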
00:08:01.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:01.856 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.857 06:36:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.754 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.755 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.755 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:04.013 00:08:04.013 --- 10.0.0.2 ping statistics --- 00:08:04.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.013 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:08:04.013 00:08:04.013 --- 10.0.0.1 ping statistics --- 00:08:04.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.013 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=526792 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 526792 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 526792 ']' 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
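At this point nvmftestinit has carved the two E810 ports into a back-to-back test network: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and both directions are verified with ping. A minimal standalone sketch of that sequence, with interface names and addresses taken from the trace above:

    # Rebuild the nvmftestinit network: target port in a namespace,
    # initiator port in the root namespace (names/IPs as traced above).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic to the default port on the initiator side.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions before the target comes up.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1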
00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.013 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.013 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:04.271 06:36:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:04.528 EAL: No free 2048 kB hugepages reported on node 1 
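Here rpc_cmd (the harness wrapper around scripts/rpc.py) builds the target configuration that spdk_nvme_perf is then pointed at: a TCP transport, a 64 MiB RAM-backed bdev, a subsystem with one namespace, and a TCP listener. A sketch of the same setup issued directly, assuming the target's RPC socket sits at the default /var/tmp/spdk.sock:

    # Target configuration, values exactly as in the trace above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u: in-capsule data size
    scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Exercise it: queue depth 64, 4 KiB I/O, 30% read mix, 10 s run.
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Because the target is launched with ip netns exec, only its network stack lives in the namespace; the path-based Unix-domain RPC socket stays on the shared filesystem, which is why the RPC calls can be driven from the root namespace.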
00:08:14.490 Initializing NVMe Controllers 00:08:14.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.490 Initialization complete. Launching workers. 00:08:14.490 ======================================================== 00:08:14.490 Latency(us) 00:08:14.490 Device Information : IOPS MiB/s Average min max 00:08:14.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15244.51 59.55 4198.23 775.38 47235.60 00:08:14.490 ======================================================== 00:08:14.490 Total : 15244.51 59.55 4198.23 775.38 47235.60 00:08:14.490 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.490 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.490 rmmod nvme_tcp 00:08:14.490 rmmod nvme_fabrics 00:08:14.490 rmmod nvme_keyring 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 526792 ']' 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 526792 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 526792 ']' 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 526792 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 526792 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 526792' 00:08:14.749 killing process with pid 526792 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 526792 00:08:14.749 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 526792 00:08:14.749 nvmf threads initialize successfully 00:08:14.749 bdev subsystem init successfully 00:08:14.749 created a nvmf target service 00:08:14.749 create targets's poll groups done 00:08:14.749 all subsystems of target started 00:08:14.749 nvmf target is running 00:08:14.749 all subsystems of target stopped 00:08:14.749 destroy targets's poll groups done 00:08:14.749 destroyed the nvmf target service 00:08:14.749 bdev subsystem finish successfully 00:08:14.749 nvmf threads destroy successfully 00:08:15.007 06:37:02 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.007 06:37:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.914 00:08:16.914 real 0m15.213s 00:08:16.914 user 0m42.120s 00:08:16.914 sys 0m3.244s 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.914 06:37:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.914 ************************************ 00:08:16.914 END TEST nvmf_example 00:08:16.914 ************************************ 00:08:16.914 06:37:04 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:16.914 06:37:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.914 06:37:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.914 06:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.914 ************************************ 00:08:16.914 START TEST nvmf_filesystem 00:08:16.914 ************************************ 00:08:16.914 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:17.176 * Looking for test storage... 
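The nvmf_example teardown above reverses the setup in order: the host-side kernel modules are unloaded (the rmmod lines), the target process is killed, and nvmf_tcp_fini flushes the initiator address and removes the namespace via the _remove_spdk_ns helper. A condensed sketch of that cleanup; the ip netns delete line is an assumption about what _remove_spdk_ns does, not a copy of its body:

    # Teardown mirroring nvmftestfini in the trace above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                      # 526792 in this run
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns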
00:08:17.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:17.176 06:37:04 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:17.176 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.177 06:37:04 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:17.177 #define SPDK_CONFIG_H 00:08:17.177 #define SPDK_CONFIG_APPS 1 00:08:17.177 #define SPDK_CONFIG_ARCH native 00:08:17.177 #undef SPDK_CONFIG_ASAN 00:08:17.177 #undef SPDK_CONFIG_AVAHI 00:08:17.177 #undef SPDK_CONFIG_CET 00:08:17.177 #define SPDK_CONFIG_COVERAGE 1 00:08:17.177 #define SPDK_CONFIG_CROSS_PREFIX 00:08:17.177 #undef SPDK_CONFIG_CRYPTO 00:08:17.177 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:17.177 #undef SPDK_CONFIG_CUSTOMOCF 00:08:17.177 #undef SPDK_CONFIG_DAOS 00:08:17.177 #define SPDK_CONFIG_DAOS_DIR 00:08:17.177 #define SPDK_CONFIG_DEBUG 1 00:08:17.177 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:17.177 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.177 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:17.177 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.177 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:17.177 #undef SPDK_CONFIG_DPDK_UADK 00:08:17.177 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.177 #define SPDK_CONFIG_EXAMPLES 1 00:08:17.177 #undef SPDK_CONFIG_FC 00:08:17.177 #define SPDK_CONFIG_FC_PATH 00:08:17.177 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:17.177 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:17.177 #undef SPDK_CONFIG_FUSE 00:08:17.177 #undef SPDK_CONFIG_FUZZER 00:08:17.177 #define SPDK_CONFIG_FUZZER_LIB 00:08:17.177 #undef SPDK_CONFIG_GOLANG 00:08:17.177 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:17.177 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:17.177 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:17.177 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:17.177 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:17.177 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:17.177 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:17.177 #define SPDK_CONFIG_IDXD 1 00:08:17.177 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:17.177 #undef SPDK_CONFIG_IPSEC_MB 00:08:17.177 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:17.177 #define SPDK_CONFIG_ISAL 1 00:08:17.177 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:17.177 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:17.177 #define SPDK_CONFIG_LIBDIR 00:08:17.177 #undef SPDK_CONFIG_LTO 00:08:17.177 #define SPDK_CONFIG_MAX_LCORES 
00:08:17.177 #define SPDK_CONFIG_NVME_CUSE 1 00:08:17.177 #undef SPDK_CONFIG_OCF 00:08:17.177 #define SPDK_CONFIG_OCF_PATH 00:08:17.177 #define SPDK_CONFIG_OPENSSL_PATH 00:08:17.177 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:17.177 #define SPDK_CONFIG_PGO_DIR 00:08:17.177 #undef SPDK_CONFIG_PGO_USE 00:08:17.177 #define SPDK_CONFIG_PREFIX /usr/local 00:08:17.177 #undef SPDK_CONFIG_RAID5F 00:08:17.177 #undef SPDK_CONFIG_RBD 00:08:17.177 #define SPDK_CONFIG_RDMA 1 00:08:17.177 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:17.177 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:17.177 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:17.177 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:17.177 #define SPDK_CONFIG_SHARED 1 00:08:17.177 #undef SPDK_CONFIG_SMA 00:08:17.177 #define SPDK_CONFIG_TESTS 1 00:08:17.177 #undef SPDK_CONFIG_TSAN 00:08:17.177 #define SPDK_CONFIG_UBLK 1 00:08:17.177 #define SPDK_CONFIG_UBSAN 1 00:08:17.177 #undef SPDK_CONFIG_UNIT_TESTS 00:08:17.177 #undef SPDK_CONFIG_URING 00:08:17.177 #define SPDK_CONFIG_URING_PATH 00:08:17.177 #undef SPDK_CONFIG_URING_ZNS 00:08:17.177 #undef SPDK_CONFIG_USDT 00:08:17.177 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:17.177 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:17.177 #define SPDK_CONFIG_VFIO_USER 1 00:08:17.177 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:17.177 #define SPDK_CONFIG_VHOST 1 00:08:17.177 #define SPDK_CONFIG_VIRTIO 1 00:08:17.177 #undef SPDK_CONFIG_VTUNE 00:08:17.177 #define SPDK_CONFIG_VTUNE_DIR 00:08:17.177 #define SPDK_CONFIG_WERROR 1 00:08:17.177 #define SPDK_CONFIG_WPDK_DIR 00:08:17.177 #undef SPDK_CONFIG_XNVME 00:08:17.177 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:17.177 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.178 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 528376 ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 528376 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.bYrs0F 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bYrs0F/tests/target /tmp/spdk.bYrs0F 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52999999488 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994692608 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8994693120 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:17.179 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941708288 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997344256 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390178816 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398940160 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:17.180 06:37:04 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996537344 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997348352 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=811008 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:17.180 * Looking for test storage... 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52999999488 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11209285632 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:17.180 
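The trace above is set_test_storage walking the df -T output, picking the mount that backs the test directory, and checking its free space against the 2214592512-byte request (2 GiB plus 64 MiB of overhead) before exporting SPDK_TEST_STORAGE. A minimal standalone sketch of that probe, assuming GNU df; the variable names are illustrative, not the exact autotest_common.sh internals:

    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # 2214592512, as traced
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    # same awk filter as the trace: print the mount point, skip the df header
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    avail_bytes=$(df --output=avail -B1 "$target_dir" | tail -n1)
    if (( avail_bytes >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
    fi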
06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.180 06:37:04 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.180 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
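build_nvmf_app_args, traced just above, assembles the target command line as a bash array so that later wrappers can be prepended without re-quoting. Condensed from this trace and the launch seen further down (binary path, shm id, and masks as logged):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and trace mask
    NO_HUGE=()
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless no-huge mode is requested
    # after nvmf_tcp_init the namespace wrapper is prepended in the same way:
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xF &                     # launch the target in the background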
00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.181 06:37:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
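The e810/x722/mlx arrays being filled here collect PCI addresses out of a pre-built pci_bus_cache map keyed by vendor:device ID. An equivalent probe with lspci, for illustration only (the harness does not call lspci at this point):

    # E810 device IDs from the trace; lspci -d takes vendor:device, -D prints the domain
    for dev in 1592 159b; do
        lspci -D -d "8086:${dev}" | awk '{print $1}'
    done
    # on this node that yields 0000:0a:00.0 and 0000:0a:00.1, matching the "Found" lines below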
00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.085 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.086 06:37:06 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.086 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.086 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:08:19.086 00:08:19.086 --- 10.0.0.2 ping statistics --- 00:08:19.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.086 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:08:19.086 00:08:19.086 --- 10.0.0.1 ping statistics --- 00:08:19.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.086 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.086 06:37:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.345 ************************************ 00:08:19.345 START TEST nvmf_filesystem_no_in_capsule 00:08:19.345 ************************************ 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:19.345 06:37:06 
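The nvmf_tcp_init sequence just traced splits the two E810 ports across a network namespace so the target and the initiator ride separate physical interfaces on one host. Condensed, with the interface names and addresses exactly as configured above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace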
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=529999 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 529999 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 529999 ']' 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.345 06:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.345 [2024-07-15 06:37:06.751511] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:19.345 [2024-07-15 06:37:06.751576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.345 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.345 [2024-07-15 06:37:06.817760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.345 [2024-07-15 06:37:06.911910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.345 [2024-07-15 06:37:06.911973] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.345 [2024-07-15 06:37:06.911999] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.345 [2024-07-15 06:37:06.912012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.345 [2024-07-15 06:37:06.912024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:19.345 [2024-07-15 06:37:06.912106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.345 [2024-07-15 06:37:06.912161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.345 [2024-07-15 06:37:06.912215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.345 [2024-07-15 06:37:06.912218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.603 [2024-07-15 06:37:07.084828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.603 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.862 Malloc1 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.862 [2024-07-15 06:37:07.266968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:19.862 { 00:08:19.862 "name": "Malloc1", 00:08:19.862 "aliases": [ 00:08:19.862 "5093e72f-26d2-4663-b8e5-2ad64cc11cee" 00:08:19.862 ], 00:08:19.862 "product_name": "Malloc disk", 00:08:19.862 "block_size": 512, 00:08:19.862 "num_blocks": 1048576, 00:08:19.862 "uuid": "5093e72f-26d2-4663-b8e5-2ad64cc11cee", 00:08:19.862 "assigned_rate_limits": { 00:08:19.862 "rw_ios_per_sec": 0, 00:08:19.862 "rw_mbytes_per_sec": 0, 00:08:19.862 "r_mbytes_per_sec": 0, 00:08:19.862 "w_mbytes_per_sec": 0 00:08:19.862 }, 00:08:19.862 "claimed": true, 00:08:19.862 "claim_type": "exclusive_write", 00:08:19.862 "zoned": false, 00:08:19.862 "supported_io_types": { 00:08:19.862 "read": true, 00:08:19.862 "write": true, 00:08:19.862 "unmap": true, 00:08:19.862 "write_zeroes": true, 00:08:19.862 "flush": true, 00:08:19.862 "reset": true, 00:08:19.862 "compare": false, 00:08:19.862 "compare_and_write": false, 00:08:19.862 "abort": true, 00:08:19.862 "nvme_admin": false, 00:08:19.862 "nvme_io": false 00:08:19.862 }, 00:08:19.862 "memory_domains": [ 00:08:19.862 { 00:08:19.862 "dma_device_id": "system", 00:08:19.862 "dma_device_type": 1 00:08:19.862 }, 00:08:19.862 { 00:08:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.862 "dma_device_type": 2 00:08:19.862 } 00:08:19.862 ], 00:08:19.862 "driver_specific": {} 00:08:19.862 } 00:08:19.862 ]' 00:08:19.862 
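get_bdev_size next pulls block_size and num_blocks out of this JSON with jq and multiplies them. In numbers, a sketch of that derivation (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):

    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    echo $(( bs * nb ))                                             # 536870912 bytes = the 512 MiB malloc bdev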
06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.862 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.427 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.427 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:20.427 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.427 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:20.427 06:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.954 06:37:09 
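The nvme connect above attaches the initiator over TCP, and waitforserial then polls lsblk until a block device reporting the target's serial shows up. Condensed from the trace (NQN, address, serial, and the grep -oP extraction as logged; the loop bound is illustrative):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # waitforserial: poll until one device carries the expected serial
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2
    done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1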
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.954 06:37:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.954 06:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:23.519 06:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.896 ************************************ 00:08:24.896 START TEST filesystem_ext4 00:08:24.896 ************************************ 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:24.896 06:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:24.896 mke2fs 1.46.5 (30-Dec-2021) 00:08:24.896 Discarding device blocks: 0/522240 done 00:08:24.896 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:24.896 
Filesystem UUID: 946aa4ea-0b0c-422d-9422-de2dec76545b 00:08:24.896 Superblock backups stored on blocks: 00:08:24.896 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:24.896 00:08:24.896 Allocating group tables: 0/64 done 00:08:24.896 Writing inode tables: 0/64 done 00:08:28.170 Creating journal (8192 blocks): done 00:08:28.685 Writing superblocks and filesystem accounting information: 0/64 done 00:08:28.685 00:08:28.685 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:28.686 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 529999 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.252 00:08:29.252 real 0m4.737s 00:08:29.252 user 0m0.019s 00:08:29.252 sys 0m0.064s 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:29.252 ************************************ 00:08:29.252 END TEST filesystem_ext4 00:08:29.252 ************************************ 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:29.252 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.510 ************************************ 00:08:29.510 START TEST filesystem_btrfs 00:08:29.510 ************************************ 00:08:29.510 06:37:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:29.510 06:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:29.510 btrfs-progs v6.6.2 00:08:29.510 See https://btrfs.readthedocs.io for more information. 00:08:29.510 00:08:29.510 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:29.510 NOTE: several default settings have changed in version 5.15, please make sure 00:08:29.510 this does not affect your deployments: 00:08:29.510 - DUP for metadata (-m dup) 00:08:29.510 - enabled no-holes (-O no-holes) 00:08:29.510 - enabled free-space-tree (-R free-space-tree) 00:08:29.510 00:08:29.510 Label: (null) 00:08:29.510 UUID: 80560c3c-1982-40b8-af62-e532c1e06ad4 00:08:29.510 Node size: 16384 00:08:29.510 Sector size: 4096 00:08:29.510 Filesystem size: 510.00MiB 00:08:29.511 Block group profiles: 00:08:29.511 Data: single 8.00MiB 00:08:29.511 Metadata: DUP 32.00MiB 00:08:29.511 System: DUP 8.00MiB 00:08:29.511 SSD detected: yes 00:08:29.511 Zoned device: no 00:08:29.511 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:29.511 Runtime features: free-space-tree 00:08:29.511 Checksum: crc32c 00:08:29.511 Number of devices: 1 00:08:29.511 Devices: 00:08:29.511 ID SIZE PATH 00:08:29.511 1 510.00MiB /dev/nvme0n1p1 00:08:29.511 00:08:29.511 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:29.511 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 529999 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.769 00:08:29.769 real 0m0.456s 00:08:29.769 user 0m0.015s 00:08:29.769 sys 0m0.111s 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.769 ************************************ 00:08:29.769 END TEST filesystem_btrfs 00:08:29.769 ************************************ 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:29.769 06:37:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.769 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.027 ************************************ 00:08:30.027 START TEST filesystem_xfs 00:08:30.027 ************************************ 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:30.027 06:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:30.027 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:30.027 = sectsz=512 attr=2, projid32bit=1 00:08:30.027 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:30.027 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:30.027 data = bsize=4096 blocks=130560, imaxpct=25 00:08:30.027 = sunit=0 swidth=0 blks 00:08:30.027 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:30.027 log =internal log bsize=4096 blocks=16384, version=2 00:08:30.027 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:30.027 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:30.958 Discarding blocks...Done. 
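Once mkfs completes, every filesystem gets the same smoke test (target/filesystem.sh lines 23-43 in the trace): write through the mount, unmount, then confirm both the target process and the block devices survived. In outline, with the literal pid from this run standing in for the script's variable:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # exercise a write over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 529999                             # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present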
00:08:30.958 06:37:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:30.958 06:37:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 529999 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.856 00:08:32.856 real 0m2.890s 00:08:32.856 user 0m0.015s 00:08:32.856 sys 0m0.063s 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:32.856 ************************************ 00:08:32.856 END TEST filesystem_xfs 00:08:32.856 ************************************ 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:32.856 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:33.114 
06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 529999 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 529999 ']' 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 529999 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 529999 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 529999' 00:08:33.114 killing process with pid 529999 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 529999 00:08:33.114 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 529999 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.372 00:08:33.372 real 0m14.251s 00:08:33.372 user 0m54.817s 00:08:33.372 sys 0m2.021s 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.372 ************************************ 00:08:33.372 END TEST nvmf_filesystem_no_in_capsule 00:08:33.372 ************************************ 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.372 06:37:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.629 
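The suite now repeats the identical ext4/btrfs/xfs matrix; the only knob that changes is the in-capsule data size handed to nvmf_filesystem_part (the earlier '[' 0 -eq 0 ']' check shows the first pass ran with 0). The value flows straight into the transport's -c option, visible below:

    # first pass: commands carry no in-capsule payload
    run_test "nvmf_filesystem_no_in_capsule" nvmf_filesystem_part 0
    # second pass: up to 4 KiB of write data rides inside the command capsule
    run_test "nvmf_filesystem_in_capsule" nvmf_filesystem_part 4096
    # inside nvmf_filesystem_part the parameter reaches the target as:
    #   rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096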
************************************ 00:08:33.629 START TEST nvmf_filesystem_in_capsule 00:08:33.629 ************************************ 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=531958 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 531958 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 531958 ']' 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.629 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.629 [2024-07-15 06:37:21.059710] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:33.629 [2024-07-15 06:37:21.059790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.629 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.629 [2024-07-15 06:37:21.129772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.629 [2024-07-15 06:37:21.222286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.629 [2024-07-15 06:37:21.222350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.629 [2024-07-15 06:37:21.222366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.629 [2024-07-15 06:37:21.222378] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.629 [2024-07-15 06:37:21.222390] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
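nvmfappstart above launches the target inside the test network namespace; the EAL banner that follows confirms the flags took effect (-m 0xF yields the four reactors on cores 0-3, -e 0xFFFF the 0xFFFF tracepoint mask, -i 0 the spdk0 shared-memory prefix). Sketched from the command line echoed in the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # autotest helper: block until /var/tmp/spdk.sock answers RPCs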
00:08:33.629 [2024-07-15 06:37:21.222470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.629 [2024-07-15 06:37:21.222528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.629 [2024-07-15 06:37:21.222577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.629 [2024-07-15 06:37:21.222580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.886 [2024-07-15 06:37:21.382830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.886 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.143 Malloc1 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.143 06:37:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.143 [2024-07-15 06:37:21.568172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:34.143 { 00:08:34.143 "name": "Malloc1", 00:08:34.143 "aliases": [ 00:08:34.143 "24d1ca7b-cd90-4e11-bcd2-4f300f49494e" 00:08:34.143 ], 00:08:34.143 "product_name": "Malloc disk", 00:08:34.143 "block_size": 512, 00:08:34.143 "num_blocks": 1048576, 00:08:34.143 "uuid": "24d1ca7b-cd90-4e11-bcd2-4f300f49494e", 00:08:34.143 "assigned_rate_limits": { 00:08:34.143 "rw_ios_per_sec": 0, 00:08:34.143 "rw_mbytes_per_sec": 0, 00:08:34.143 "r_mbytes_per_sec": 0, 00:08:34.143 "w_mbytes_per_sec": 0 00:08:34.143 }, 00:08:34.143 "claimed": true, 00:08:34.143 "claim_type": "exclusive_write", 00:08:34.143 "zoned": false, 00:08:34.143 "supported_io_types": { 00:08:34.143 "read": true, 00:08:34.143 "write": true, 00:08:34.143 "unmap": true, 00:08:34.143 "write_zeroes": true, 00:08:34.143 "flush": true, 00:08:34.143 "reset": true, 00:08:34.143 "compare": false, 00:08:34.143 "compare_and_write": false, 00:08:34.143 "abort": true, 00:08:34.143 "nvme_admin": false, 00:08:34.143 "nvme_io": false 00:08:34.143 }, 00:08:34.143 "memory_domains": [ 00:08:34.143 { 00:08:34.143 "dma_device_id": "system", 00:08:34.143 "dma_device_type": 1 00:08:34.143 }, 00:08:34.143 { 00:08:34.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.143 "dma_device_type": 2 00:08:34.143 } 00:08:34.143 ], 00:08:34.143 "driver_specific": {} 00:08:34.143 } 00:08:34.143 ]' 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:34.143 06:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.706 06:37:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.706 06:37:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:34.706 06:37:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.706 06:37:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:34.706 06:37:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
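With the host connected, the script derives the expected size from the bdev JSON dumped above: the two jq calls pull block_size and num_blocks, and get_bdev_size reports MiB, which is then compared against the namespace size read from sysfs. A sketch of that arithmetic (not the helper verbatim):

    bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    echo $(( bs * nb / 1024 / 1024 ))    # 512 MiB -> malloc_size=536870912 bytes
    # sec_size_to_bytes nvme0n1 must echo the same 536870912 before parted runs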
# nvme_size=536870912 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:37.229 06:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:37.486 06:37:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.857 ************************************ 00:08:38.857 START TEST filesystem_in_capsule_ext4 00:08:38.857 ************************************ 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:38.857 06:37:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:38.857 mke2fs 1.46.5 (30-Dec-2021) 00:08:38.857 Discarding device blocks: 0/522240 done 00:08:38.857 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:38.857 Filesystem UUID: 39f02d97-8975-41f4-9f09-b218d02d41aa 00:08:38.857 Superblock backups stored on blocks: 00:08:38.857 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:38.857 00:08:38.857 Allocating group tables: 0/64 done 00:08:38.857 Writing inode tables: 0/64 done 00:08:39.114 Creating journal (8192 blocks): done 00:08:39.943 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:08:39.943 00:08:39.943 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:39.943 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 531958 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.232 00:08:40.232 real 0m1.595s 00:08:40.232 user 0m0.012s 00:08:40.232 sys 0m0.055s 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:40.232 ************************************ 00:08:40.232 END TEST filesystem_in_capsule_ext4 00:08:40.232 ************************************ 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:40.232 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.233 ************************************ 00:08:40.233 START TEST filesystem_in_capsule_btrfs 00:08:40.233 ************************************ 00:08:40.233 06:37:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:40.233 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:40.491 btrfs-progs v6.6.2 00:08:40.491 See https://btrfs.readthedocs.io for more information. 00:08:40.491 00:08:40.491 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:40.491 NOTE: several default settings have changed in version 5.15, please make sure 00:08:40.492 this does not affect your deployments: 00:08:40.492 - DUP for metadata (-m dup) 00:08:40.492 - enabled no-holes (-O no-holes) 00:08:40.492 - enabled free-space-tree (-R free-space-tree) 00:08:40.492 00:08:40.492 Label: (null) 00:08:40.492 UUID: b2182c07-b358-42ff-9460-b5c4caf61ebb 00:08:40.492 Node size: 16384 00:08:40.492 Sector size: 4096 00:08:40.492 Filesystem size: 510.00MiB 00:08:40.492 Block group profiles: 00:08:40.492 Data: single 8.00MiB 00:08:40.492 Metadata: DUP 32.00MiB 00:08:40.492 System: DUP 8.00MiB 00:08:40.492 SSD detected: yes 00:08:40.492 Zoned device: no 00:08:40.492 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:40.492 Runtime features: free-space-tree 00:08:40.492 Checksum: crc32c 00:08:40.492 Number of devices: 1 00:08:40.492 Devices: 00:08:40.492 ID SIZE PATH 00:08:40.492 1 510.00MiB /dev/nvme0n1p1 00:08:40.492 00:08:40.492 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:40.492 06:37:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 531958 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.427 00:08:41.427 real 0m1.146s 00:08:41.427 user 0m0.010s 00:08:41.427 sys 0m0.119s 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:41.427 ************************************ 00:08:41.427 END TEST filesystem_in_capsule_btrfs 00:08:41.427 ************************************ 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.427 ************************************ 00:08:41.427 START TEST filesystem_in_capsule_xfs 00:08:41.427 ************************************ 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:41.427 06:37:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:41.427 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:41.427 = sectsz=512 attr=2, projid32bit=1 00:08:41.427 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:41.427 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:41.427 data = bsize=4096 blocks=130560, imaxpct=25 00:08:41.427 = sunit=0 swidth=0 blks 00:08:41.427 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:41.427 log =internal log bsize=4096 blocks=16384, version=2 00:08:41.427 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:41.427 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:42.362 Discarding blocks...Done. 
00:08:42.362 06:37:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:42.362 06:37:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 531958 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.261 00:08:44.261 real 0m2.882s 00:08:44.261 user 0m0.013s 00:08:44.261 sys 0m0.064s 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.261 ************************************ 00:08:44.261 END TEST filesystem_in_capsule_xfs 00:08:44.261 ************************************ 00:08:44.261 06:37:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.519 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:44.519 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.777 06:37:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 531958 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 531958 ']' 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 531958 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:44.777 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 531958 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 531958' 00:08:44.778 killing process with pid 531958 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 531958 00:08:44.778 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 531958 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:45.344 00:08:45.344 real 0m11.678s 00:08:45.344 user 0m44.833s 00:08:45.344 sys 0m1.689s 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.344 ************************************ 00:08:45.344 END TEST nvmf_filesystem_in_capsule 00:08:45.344 ************************************ 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:45.344 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.345 rmmod nvme_tcp 00:08:45.345 rmmod nvme_fabrics 00:08:45.345 rmmod nvme_keyring 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.345 06:37:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.245 06:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.245 00:08:47.245 real 0m30.319s 00:08:47.245 user 1m40.506s 00:08:47.245 sys 0m5.237s 00:08:47.245 06:37:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.245 06:37:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.245 ************************************ 00:08:47.245 END TEST nvmf_filesystem 00:08:47.245 ************************************ 00:08:47.245 06:37:34 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:47.245 06:37:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:47.245 06:37:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.245 06:37:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.245 ************************************ 00:08:47.245 START TEST nvmf_target_discovery 00:08:47.245 ************************************ 00:08:47.245 06:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:47.504 * Looking for test storage... 
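The rmmod lines above are the tail of nvmftestfini: after the subsystem is deleted and nvmf_tgt killed, the host-side kernel modules are unloaded (inside a tolerant set +e retry loop, per the {1..20} iteration above) and the test address flushed, so the next suite, nvmf_target_discovery starting here, gets a clean slate. In outline:

    sync                         # nvmfcleanup flushes before touching modules
    modprobe -v -r nvme-tcp      # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1     # clear the initiator-side test address (interface name from the trace)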
00:08:47.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.504 06:37:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.505 06:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.407 06:37:36 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.407 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:08:49.408 00:08:49.408 --- 10.0.0.2 ping statistics --- 00:08:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.408 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:49.408 00:08:49.408 --- 10.0.0.1 ping statistics --- 00:08:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.408 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=535442 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 535442 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 535442 ']' 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:49.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:49.408 06:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.667 [2024-07-15 06:37:37.040001] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:49.667 [2024-07-15 06:37:37.040099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.667 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.667 [2024-07-15 06:37:37.112917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.667 [2024-07-15 06:37:37.206623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.667 [2024-07-15 06:37:37.206685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.667 [2024-07-15 06:37:37.206701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.667 [2024-07-15 06:37:37.206714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.667 [2024-07-15 06:37:37.206725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.667 [2024-07-15 06:37:37.206802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.667 [2024-07-15 06:37:37.206870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.667 [2024-07-15 06:37:37.206918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.667 [2024-07-15 06:37:37.206922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.925 [2024-07-15 06:37:37.367810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:49.925 06:37:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.925 Null1 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.925 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 [2024-07-15 06:37:37.408174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 Null2 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:49.926 06:37:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 Null3 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 Null4 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.926 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:50.184 00:08:50.184 Discovery Log Number of Records 6, Generation counter 6 00:08:50.184 =====Discovery Log Entry 0====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: current discovery subsystem 00:08:50.184 treq: not required 00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4420 00:08:50.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: explicit discovery connections, duplicate discovery information 00:08:50.184 sectype: none 00:08:50.184 =====Discovery Log Entry 1====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: nvme subsystem 00:08:50.184 treq: not required 00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4420 00:08:50.184 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: none 00:08:50.184 sectype: none 00:08:50.184 =====Discovery Log Entry 2====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: nvme subsystem 00:08:50.184 treq: not required 00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4420 00:08:50.184 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: none 00:08:50.184 sectype: none 00:08:50.184 =====Discovery Log Entry 3====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: nvme subsystem 00:08:50.184 treq: not required 00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4420 00:08:50.184 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: none 00:08:50.184 sectype: none 00:08:50.184 =====Discovery Log Entry 4====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: nvme subsystem 00:08:50.184 treq: not required 
00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4420 00:08:50.184 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: none 00:08:50.184 sectype: none 00:08:50.184 =====Discovery Log Entry 5====== 00:08:50.184 trtype: tcp 00:08:50.184 adrfam: ipv4 00:08:50.184 subtype: discovery subsystem referral 00:08:50.184 treq: not required 00:08:50.184 portid: 0 00:08:50.184 trsvcid: 4430 00:08:50.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.184 traddr: 10.0.0.2 00:08:50.184 eflags: none 00:08:50.184 sectype: none 00:08:50.184 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:50.184 Perform nvmf subsystem discovery via RPC 00:08:50.184 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:50.184 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.184 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.184 [ 00:08:50.184 { 00:08:50.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:50.184 "subtype": "Discovery", 00:08:50.184 "listen_addresses": [ 00:08:50.184 { 00:08:50.184 "trtype": "TCP", 00:08:50.184 "adrfam": "IPv4", 00:08:50.184 "traddr": "10.0.0.2", 00:08:50.184 "trsvcid": "4420" 00:08:50.184 } 00:08:50.184 ], 00:08:50.184 "allow_any_host": true, 00:08:50.184 "hosts": [] 00:08:50.184 }, 00:08:50.184 { 00:08:50.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.184 "subtype": "NVMe", 00:08:50.184 "listen_addresses": [ 00:08:50.184 { 00:08:50.184 "trtype": "TCP", 00:08:50.184 "adrfam": "IPv4", 00:08:50.184 "traddr": "10.0.0.2", 00:08:50.184 "trsvcid": "4420" 00:08:50.184 } 00:08:50.184 ], 00:08:50.184 "allow_any_host": true, 00:08:50.184 "hosts": [], 00:08:50.184 "serial_number": "SPDK00000000000001", 00:08:50.184 "model_number": "SPDK bdev Controller", 00:08:50.184 "max_namespaces": 32, 00:08:50.184 "min_cntlid": 1, 00:08:50.184 "max_cntlid": 65519, 00:08:50.184 "namespaces": [ 00:08:50.184 { 00:08:50.184 "nsid": 1, 00:08:50.184 "bdev_name": "Null1", 00:08:50.184 "name": "Null1", 00:08:50.184 "nguid": "59EB19F0B53148DF8D9594D5427A5BAD", 00:08:50.184 "uuid": "59eb19f0-b531-48df-8d95-94d5427a5bad" 00:08:50.184 } 00:08:50.184 ] 00:08:50.184 }, 00:08:50.184 { 00:08:50.184 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.184 "subtype": "NVMe", 00:08:50.184 "listen_addresses": [ 00:08:50.184 { 00:08:50.184 "trtype": "TCP", 00:08:50.184 "adrfam": "IPv4", 00:08:50.184 "traddr": "10.0.0.2", 00:08:50.184 "trsvcid": "4420" 00:08:50.184 } 00:08:50.184 ], 00:08:50.184 "allow_any_host": true, 00:08:50.184 "hosts": [], 00:08:50.184 "serial_number": "SPDK00000000000002", 00:08:50.184 "model_number": "SPDK bdev Controller", 00:08:50.184 "max_namespaces": 32, 00:08:50.184 "min_cntlid": 1, 00:08:50.184 "max_cntlid": 65519, 00:08:50.184 "namespaces": [ 00:08:50.184 { 00:08:50.184 "nsid": 1, 00:08:50.184 "bdev_name": "Null2", 00:08:50.184 "name": "Null2", 00:08:50.184 "nguid": "ABF797D35A1C4AE893EBDDCEFDFB5840", 00:08:50.184 "uuid": "abf797d3-5a1c-4ae8-93eb-ddcefdfb5840" 00:08:50.184 } 00:08:50.184 ] 00:08:50.184 }, 00:08:50.184 { 00:08:50.184 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:50.184 "subtype": "NVMe", 00:08:50.184 "listen_addresses": [ 00:08:50.184 { 00:08:50.184 "trtype": "TCP", 00:08:50.184 "adrfam": "IPv4", 00:08:50.184 "traddr": "10.0.0.2", 00:08:50.184 "trsvcid": "4420" 00:08:50.184 } 00:08:50.184 ], 00:08:50.184 "allow_any_host": true, 
00:08:50.184 "hosts": [], 00:08:50.184 "serial_number": "SPDK00000000000003", 00:08:50.184 "model_number": "SPDK bdev Controller", 00:08:50.184 "max_namespaces": 32, 00:08:50.184 "min_cntlid": 1, 00:08:50.184 "max_cntlid": 65519, 00:08:50.184 "namespaces": [ 00:08:50.184 { 00:08:50.184 "nsid": 1, 00:08:50.184 "bdev_name": "Null3", 00:08:50.184 "name": "Null3", 00:08:50.184 "nguid": "8C78D879142E479BA25F713B10FE6153", 00:08:50.184 "uuid": "8c78d879-142e-479b-a25f-713b10fe6153" 00:08:50.184 } 00:08:50.184 ] 00:08:50.184 }, 00:08:50.184 { 00:08:50.184 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:50.184 "subtype": "NVMe", 00:08:50.184 "listen_addresses": [ 00:08:50.184 { 00:08:50.184 "trtype": "TCP", 00:08:50.184 "adrfam": "IPv4", 00:08:50.184 "traddr": "10.0.0.2", 00:08:50.184 "trsvcid": "4420" 00:08:50.184 } 00:08:50.184 ], 00:08:50.184 "allow_any_host": true, 00:08:50.184 "hosts": [], 00:08:50.184 "serial_number": "SPDK00000000000004", 00:08:50.184 "model_number": "SPDK bdev Controller", 00:08:50.184 "max_namespaces": 32, 00:08:50.184 "min_cntlid": 1, 00:08:50.184 "max_cntlid": 65519, 00:08:50.184 "namespaces": [ 00:08:50.184 { 00:08:50.184 "nsid": 1, 00:08:50.184 "bdev_name": "Null4", 00:08:50.184 "name": "Null4", 00:08:50.184 "nguid": "74A69865B5C64D539BE6D10397FAEEAE", 00:08:50.184 "uuid": "74a69865-b5c6-4d53-9be6-d10397faeeae" 00:08:50.184 } 00:08:50.184 ] 00:08:50.184 } 00:08:50.184 ] 00:08:50.184 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.185 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.443 rmmod nvme_tcp 00:08:50.443 rmmod nvme_fabrics 00:08:50.443 rmmod nvme_keyring 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 535442 ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 535442 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 535442 ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 535442 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 535442 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 535442' 00:08:50.443 killing process with pid 535442 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 535442 00:08:50.443 06:37:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 535442 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.700 06:37:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.608 06:37:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.608 00:08:52.608 real 0m5.361s 00:08:52.608 user 0m4.595s 00:08:52.608 sys 0m1.748s 00:08:52.608 06:37:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.608 06:37:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:52.608 ************************************ 00:08:52.608 END TEST nvmf_target_discovery 00:08:52.608 ************************************ 00:08:52.866 06:37:40 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:52.866 06:37:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:52.866 06:37:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.866 06:37:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 ************************************ 00:08:52.866 START TEST nvmf_referrals 00:08:52.866 ************************************ 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:52.866 * Looking for test storage... 00:08:52.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.866 06:37:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.867 06:37:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.867 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.867 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.867 06:37:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.867 06:37:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.395 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.396 06:37:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:55.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:55.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.396 06:37:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:55.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:55.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.396 06:37:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:55.396 00:08:55.396 --- 10.0.0.2 ping statistics --- 00:08:55.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.396 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:55.396 00:08:55.396 --- 10.0.0.1 ping statistics --- 00:08:55.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.396 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=537523 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 537523 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 537523 ']' 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:55.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:55.396 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.396 [2024-07-15 06:37:42.602203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:55.396 [2024-07-15 06:37:42.602284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.396 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.396 [2024-07-15 06:37:42.672283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.397 [2024-07-15 06:37:42.769171] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.397 [2024-07-15 06:37:42.769232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.397 [2024-07-15 06:37:42.769249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.397 [2024-07-15 06:37:42.769262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.397 [2024-07-15 06:37:42.769274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.397 [2024-07-15 06:37:42.769356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.397 [2024-07-15 06:37:42.769414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.397 [2024-07-15 06:37:42.769473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.397 [2024-07-15 06:37:42.769476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 [2024-07-15 06:37:42.933849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 [2024-07-15 06:37:42.946154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
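With the discovery listener up on 10.0.0.2:8009, the referral checks that follow all go through SPDK's JSON-RPC interface (rpc_cmd). A minimal standalone sketch of the same round-trip, assuming scripts/rpc.py from an SPDK checkout and a target already answering on the default /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Add three discovery referrals, read them back, then remove one.
RPC=./scripts/rpc.py
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
"$RPC" nvmf_discovery_get_referrals | jq length    # expect 3, per the (( 3 == 3 )) check below
"$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The test then cross-checks this RPC-side view against what an initiator actually sees, by running nvme discover against port 8009 and comparing the sorted referral traddr lists from both sides.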
00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 06:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.655 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:55.912 06:37:43 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.912 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.169 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.433 06:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.710 rmmod nvme_tcp 00:08:56.710 rmmod nvme_fabrics 00:08:56.710 rmmod nvme_keyring 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 537523 ']' 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 537523 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 537523 ']' 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 537523 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:56.710 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 537523 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 537523' 00:08:56.969 killing process with pid 537523 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 537523 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 537523 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.969 06:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.503 06:37:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.503 00:08:59.503 real 0m6.341s 00:08:59.503 user 0m8.444s 00:08:59.503 sys 0m2.194s 00:08:59.503 06:37:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 
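nvmftestfini above unwinds the fixture in reverse: drop the EXIT trap, unload the nvme-tcp and nvme-fabrics modules, kill the target by pid, and tear down the namespace plumbing. A rough standalone equivalent, assuming the same names; _remove_spdk_ns is internal to nvmf/common.sh, so the netns deletion here is an assumption about its effect:

#!/usr/bin/env bash
nvmfpid=537523                       # pid recorded when nvmf_tgt was launched
kill "$nvmfpid" 2>/dev/null
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done  # killprocess: wait for exit
modprobe -v -r nvme-tcp              # drags out nvme_fabrics/nvme_keyring too, as logged
modprobe -v -r nvme-fabrics || true  # may already be gone after the line above
ip netns del cvl_0_0_ns_spdk         # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # drop the initiator-side test address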
00:08:59.503 06:37:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.503 ************************************ 00:08:59.503 END TEST nvmf_referrals 00:08:59.503 ************************************ 00:08:59.503 06:37:46 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:59.503 06:37:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:59.503 06:37:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.503 06:37:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.503 ************************************ 00:08:59.503 START TEST nvmf_connect_disconnect 00:08:59.503 ************************************ 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:59.503 * Looking for test storage... 00:08:59.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.503 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.504 
06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.504 06:37:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:01.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:01.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.406 
06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:01.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:01.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.406 06:37:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:01.406 00:09:01.406 --- 10.0.0.2 ping statistics --- 00:09:01.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.406 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:01.406 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:09:01.406 00:09:01.406 --- 10.0.0.1 ping statistics --- 00:09:01.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.407 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=539702 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 539702 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 539702 ']' 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:01.407 06:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.407 [2024-07-15 06:37:48.903997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
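nvmfappstart launches the target inside the test namespace (so its listeners bind the moved cvl_0_0 interface) and then blocks until the RPC socket answers. A sketch of the same sequence; the polling loop is an approximation of waitforlisten, which retries up to max_retries=100 as shown above:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -i 0: shm id, -e 0xFFFF: all tracepoint groups, -m 0xF: cores 0-3
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 100); do
    # /var/tmp/spdk.sock is a Unix socket on the shared filesystem,
    # so it is reachable from outside the namespace
    "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done

The four "Reactor started on core N" notices correspond to the 0xF core mask, matching the "Total cores available: 4" line logged at startup.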
00:09:01.407 [2024-07-15 06:37:48.904087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.407 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.407 [2024-07-15 06:37:48.975334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.665 [2024-07-15 06:37:49.069867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.665 [2024-07-15 06:37:49.069935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.665 [2024-07-15 06:37:49.069952] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.665 [2024-07-15 06:37:49.069965] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.665 [2024-07-15 06:37:49.069976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.665 [2024-07-15 06:37:49.070066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.665 [2024-07-15 06:37:49.070120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.665 [2024-07-15 06:37:49.070171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.665 [2024-07-15 06:37:49.070174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 [2024-07-15 06:37:49.224607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.665 06:37:49 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.665 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 [2024-07-15 06:37:49.276499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.922 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.922 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:01.922 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:01.922 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:01.922 06:37:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:04.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.585 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:50.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.443 [2024-07-15 06:41:18.823688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184fd40 is same with the state(5) to be set 00:12:31.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.389 rmmod nvme_tcp 00:12:52.389 rmmod nvme_fabrics 00:12:52.389 rmmod nvme_keyring 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 539702 ']' 00:12:52.389 06:41:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 539702 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 539702 ']' 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 539702 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 539702 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 539702' 00:12:52.389 killing process with pid 539702 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 539702 00:12:52.389 06:41:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 539702 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.648 06:41:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.182 06:41:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.182 00:12:55.182 real 3m55.608s 00:12:55.182 user 14m57.423s 00:12:55.182 sys 0m34.440s 00:12:55.182 06:41:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.182 06:41:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.182 ************************************ 00:12:55.182 END TEST nvmf_connect_disconnect 00:12:55.182 ************************************ 00:12:55.182 06:41:42 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:55.182 06:41:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:55.182 06:41:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.182 06:41:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.182 ************************************ 00:12:55.182 START TEST nvmf_multitarget 00:12:55.182 ************************************ 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:55.182 * Looking for test storage... 
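For reference, the hundred NQN:nqn.2016-06.io.spdk:cnode1 lines above came from connect_disconnect.sh: the trace shows num_iterations raised to 100, NVME_CONNECT gaining -i 8 (eight I/O queues), and xtrace silenced with set +x before the loop. The loop body itself is therefore not echoed; this is a hedged reconstruction from the visible setup and the nvme-cli output format, not the script verbatim.

    NVME_CONNECT='nvme connect -i 8'
    num_iterations=100
    for ((i = 1; i <= num_iterations; i++)); do
        $NVME_CONNECT -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # nvme disconnect prints the "NQN:... disconnected 1 controller(s)"
        # lines that dominate the trace above.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done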
00:12:55.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.182 06:41:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
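The cleanup contract is armed in two stages, both visible verbatim in the trace: nvmftestinit installs a plain nvmftestfini trap before touching the network, and once the target is up nvmfappstart re-arms it so a failing test also dumps the app's shared memory first.

    trap nvmftestfini SIGINT SIGTERM EXIT        # nvmf/common.sh@446, at init
    # ...after nvmfappstart succeeds (nvmf/common.sh@484):
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT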
00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.183 06:41:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.081 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:57.082 00:12:57.082 --- 10.0.0.2 ping statistics --- 00:12:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.082 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:12:57.082 00:12:57.082 --- 10.0.0.1 ping statistics --- 00:12:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.082 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=570785 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 570785 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 570785 ']' 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.082 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.082 [2024-07-15 06:41:44.610723] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
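Collected in order from the nvmf_tcp_init trace above: the target-side NIC is moved into a fresh namespace, each side gets one address of the 10.0.0.0/24 pair, the NVMe/TCP port is opened on the initiator interface, and one ping in each direction gates the rest of the test. Every command below appears verbatim in the log; only the grouping and comments are added.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator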
00:12:57.082 [2024-07-15 06:41:44.610809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.082 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.082 [2024-07-15 06:41:44.675274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.339 [2024-07-15 06:41:44.761992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.339 [2024-07-15 06:41:44.762055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.339 [2024-07-15 06:41:44.762068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.339 [2024-07-15 06:41:44.762079] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.339 [2024-07-15 06:41:44.762089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.339 [2024-07-15 06:41:44.762142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.339 [2024-07-15 06:41:44.762204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.340 [2024-07-15 06:41:44.762269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.340 [2024-07-15 06:41:44.762272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.340 06:41:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:57.596 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:57.596 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:57.596 "nvmf_tgt_1" 00:12:57.596 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:57.854 "nvmf_tgt_2" 00:12:57.854 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.854 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:57.854 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:57.854 
06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:58.112 true 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:58.112 true 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.112 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.112 rmmod nvme_tcp 00:12:58.370 rmmod nvme_fabrics 00:12:58.370 rmmod nvme_keyring 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 570785 ']' 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 570785 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 570785 ']' 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 570785 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 570785 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 570785' 00:12:58.370 killing process with pid 570785 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 570785 00:12:58.370 06:41:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 570785 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.631 06:41:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.557 06:41:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.557 00:13:00.557 real 0m5.724s 00:13:00.557 user 0m6.444s 00:13:00.557 sys 0m1.890s 00:13:00.557 06:41:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.557 06:41:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 ************************************ 00:13:00.557 END TEST nvmf_multitarget 00:13:00.557 ************************************ 00:13:00.557 06:41:48 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.557 06:41:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.557 06:41:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.557 06:41:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 ************************************ 00:13:00.557 START TEST nvmf_rpc 00:13:00.557 ************************************ 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.557 * Looking for test storage... 00:13:00.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.557 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.558 06:41:48 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.558 
06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.558 06:41:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.092 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.093 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.093 
06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.093 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:13:03.093 00:13:03.093 --- 10.0.0.2 ping statistics --- 00:13:03.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.093 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:13:03.093 00:13:03.093 --- 10.0.0.1 ping statistics --- 00:13:03.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.093 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=572880 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 572880 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 572880 ']' 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.093 [2024-07-15 06:41:50.329434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
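The ping exchange above confirms the wiring nvmf_tcp_init just performed: one E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side (10.0.0.2), while its sibling (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule opening TCP/4420. A condensed sketch of that setup, assuming the cvl_0_* names from this run (the nvmf_tgt path is shortened; the real run uses the full Jenkins workspace path):

    #!/usr/bin/env bash
    # Point-to-point NVMe/TCP test bed: target port in a netns, initiator in root ns.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port moves into the netns
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                             # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns
    # The target application itself then runs inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &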
00:13:03.093 [2024-07-15 06:41:50.329511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.093 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.093 [2024-07-15 06:41:50.395435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.093 [2024-07-15 06:41:50.483813] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.093 [2024-07-15 06:41:50.483869] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.093 [2024-07-15 06:41:50.483889] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.093 [2024-07-15 06:41:50.483901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.093 [2024-07-15 06:41:50.483911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.093 [2024-07-15 06:41:50.483959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.093 [2024-07-15 06:41:50.484017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.093 [2024-07-15 06:41:50.484084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.093 [2024-07-15 06:41:50.484086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.093 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:03.093 "tick_rate": 2700000000, 00:13:03.093 "poll_groups": [ 00:13:03.093 { 00:13:03.093 "name": "nvmf_tgt_poll_group_000", 00:13:03.093 "admin_qpairs": 0, 00:13:03.093 "io_qpairs": 0, 00:13:03.093 "current_admin_qpairs": 0, 00:13:03.093 "current_io_qpairs": 0, 00:13:03.093 "pending_bdev_io": 0, 00:13:03.093 "completed_nvme_io": 0, 00:13:03.093 "transports": [] 00:13:03.093 }, 00:13:03.093 { 00:13:03.093 "name": "nvmf_tgt_poll_group_001", 00:13:03.093 "admin_qpairs": 0, 00:13:03.093 "io_qpairs": 0, 00:13:03.093 "current_admin_qpairs": 0, 00:13:03.093 "current_io_qpairs": 0, 00:13:03.093 "pending_bdev_io": 0, 00:13:03.093 "completed_nvme_io": 0, 00:13:03.093 "transports": [] 00:13:03.093 }, 00:13:03.093 { 00:13:03.093 "name": "nvmf_tgt_poll_group_002", 00:13:03.093 "admin_qpairs": 0, 00:13:03.093 "io_qpairs": 0, 00:13:03.093 "current_admin_qpairs": 0, 00:13:03.093 "current_io_qpairs": 0, 00:13:03.093 "pending_bdev_io": 0, 00:13:03.093 "completed_nvme_io": 0, 00:13:03.093 "transports": [] 
00:13:03.093 }, 00:13:03.093 { 00:13:03.093 "name": "nvmf_tgt_poll_group_003", 00:13:03.093 "admin_qpairs": 0, 00:13:03.093 "io_qpairs": 0, 00:13:03.093 "current_admin_qpairs": 0, 00:13:03.093 "current_io_qpairs": 0, 00:13:03.093 "pending_bdev_io": 0, 00:13:03.093 "completed_nvme_io": 0, 00:13:03.093 "transports": [] 00:13:03.094 } 00:13:03.094 ] 00:13:03.094 }' 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:03.094 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 [2024-07-15 06:41:50.735036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:03.352 "tick_rate": 2700000000, 00:13:03.352 "poll_groups": [ 00:13:03.352 { 00:13:03.352 "name": "nvmf_tgt_poll_group_000", 00:13:03.352 "admin_qpairs": 0, 00:13:03.352 "io_qpairs": 0, 00:13:03.352 "current_admin_qpairs": 0, 00:13:03.352 "current_io_qpairs": 0, 00:13:03.352 "pending_bdev_io": 0, 00:13:03.352 "completed_nvme_io": 0, 00:13:03.352 "transports": [ 00:13:03.352 { 00:13:03.352 "trtype": "TCP" 00:13:03.352 } 00:13:03.352 ] 00:13:03.352 }, 00:13:03.352 { 00:13:03.352 "name": "nvmf_tgt_poll_group_001", 00:13:03.352 "admin_qpairs": 0, 00:13:03.352 "io_qpairs": 0, 00:13:03.352 "current_admin_qpairs": 0, 00:13:03.352 "current_io_qpairs": 0, 00:13:03.352 "pending_bdev_io": 0, 00:13:03.352 "completed_nvme_io": 0, 00:13:03.352 "transports": [ 00:13:03.352 { 00:13:03.352 "trtype": "TCP" 00:13:03.352 } 00:13:03.352 ] 00:13:03.352 }, 00:13:03.352 { 00:13:03.352 "name": "nvmf_tgt_poll_group_002", 00:13:03.352 "admin_qpairs": 0, 00:13:03.352 "io_qpairs": 0, 00:13:03.352 "current_admin_qpairs": 0, 00:13:03.352 "current_io_qpairs": 0, 00:13:03.352 "pending_bdev_io": 0, 00:13:03.352 "completed_nvme_io": 0, 00:13:03.352 "transports": [ 00:13:03.352 { 00:13:03.352 "trtype": "TCP" 00:13:03.352 } 00:13:03.352 ] 00:13:03.352 }, 00:13:03.352 { 00:13:03.352 "name": "nvmf_tgt_poll_group_003", 00:13:03.352 "admin_qpairs": 0, 00:13:03.352 "io_qpairs": 0, 00:13:03.352 "current_admin_qpairs": 0, 00:13:03.352 "current_io_qpairs": 0, 00:13:03.352 "pending_bdev_io": 0, 00:13:03.352 "completed_nvme_io": 0, 00:13:03.352 "transports": [ 00:13:03.352 { 00:13:03.352 "trtype": "TCP" 00:13:03.352 } 00:13:03.352 ] 00:13:03.352 } 00:13:03.352 ] 
00:13:03.352 }' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 Malloc1 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.352 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.353 [2024-07-15 06:41:50.896279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.353 [2024-07-15 06:41:50.918764] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:03.353 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.353 could not add new controller: failed to write to nvme-fabrics device 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.353 06:41:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.284 06:41:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.284 06:41:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:04.284 06:41:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.284 06:41:51 
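What just ran is the negative half of the host access-control test: with allow_any_host disabled, a connect from an unlisted host NQN is rejected by the target ("does not allow host") and nvme-cli surfaces it as an I/O error on /dev/nvme-fabrics; whitelisting the host NQN then lets the same connect succeed. In isolation the sequence looks roughly like this, with `rpc` abbreviating SPDK's scripts/rpc.py against the target's socket and the NQNs taken from this run:

    #!/usr/bin/env bash
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    rpc nvmf_subsystem_allow_any_host -d "$NQN"    # deny hosts by default
    # Expected to fail: this host NQN is not on the subsystem's allow list.
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
        && echo "unexpected: connect should have been rejected" >&2
    rpc nvmf_subsystem_add_host "$NQN" "$HOSTNQN"  # whitelist this host NQN
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420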
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:04.284 06:41:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.181 [2024-07-15 06:41:53.768480] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:06.181 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.181 could not add new controller: failed to write to nvme-fabrics device 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:06.181 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.439 06:41:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.004 06:41:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.004 06:41:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:07.004 06:41:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.004 06:41:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:07.004 06:41:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:08.897 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.156 [2024-07-15 06:41:56.590556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.156 06:41:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.721 06:41:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.721 06:41:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:09.721 06:41:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.721 06:41:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:09.721 06:41:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 [2024-07-15 06:41:59.405934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.246 
06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.246 06:41:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.504 06:42:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.504 06:42:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:12.504 06:42:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.504 06:42:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:12.504 06:42:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.031 06:42:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 [2024-07-15 06:42:02.187271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.289 06:42:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.289 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:15.289 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.289 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:15.289 06:42:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
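Each connect in this section is followed by the same waitforserial poll: the test does not treat `nvme connect` returning as proof that the block device exists yet, so it re-runs lsblk until a device carrying the subsystem's serial number shows up. The pattern, as a self-contained sketch with the loop bound and sleep interval traced above:

    # Poll until `want` block devices with the given serial appear (15 tries, 2 s apart).
    waitforserial() {
        local serial=$1 want=${2:-1} i=0 got
        while (( i++ <= 15 )); do
            sleep 2
            got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( got == want )) && return 0
        done
        echo "no device with serial $serial appeared" >&2
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME   # serial used throughout this run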
SPDKISFASTANDAWESOME 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.832 [2024-07-15 06:42:04.928358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.832 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.833 06:42:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.090 06:42:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.090 06:42:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:18.090 06:42:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
00:13:18.090 06:42:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:18.090 06:42:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 [2024-07-15 06:42:07.787669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.622 06:42:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.187 06:42:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.187 06:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:21.187 06:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.187 06:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:21.187 06:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
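This is the last of five identical passes over target/rpc.sh lines 81-94: create the subsystem, listen on 10.0.0.2:4420, attach Malloc1 as namespace 5, open it to any host, connect from the initiator, verify the device appears, then tear everything back down. Flattened out of the xtrace, one iteration amounts to (same `rpc` shorthand and NQNs as in the sketch above):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    for i in $(seq 1 5); do
        rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
        rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5   # Malloc1 as nsid 5
        rpc nvmf_subsystem_allow_any_host "$NQN"
        nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME              # poll helper sketched earlier
        nvme disconnect -n "$NQN"
        rpc nvmf_subsystem_remove_ns "$NQN" 5
        rpc nvmf_delete_subsystem "$NQN"
    done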
00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 [2024-07-15 06:42:10.654007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 [2024-07-15 06:42:10.702088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.102 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 [2024-07-15 06:42:10.750290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 [2024-07-15 06:42:10.798432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 [2024-07-15 06:42:10.846596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:23.361 "tick_rate": 2700000000, 00:13:23.361 "poll_groups": [ 00:13:23.361 { 00:13:23.361 "name": "nvmf_tgt_poll_group_000", 00:13:23.361 "admin_qpairs": 2, 00:13:23.361 
"io_qpairs": 84, 00:13:23.361 "current_admin_qpairs": 0, 00:13:23.361 "current_io_qpairs": 0, 00:13:23.361 "pending_bdev_io": 0, 00:13:23.361 "completed_nvme_io": 100, 00:13:23.361 "transports": [ 00:13:23.361 { 00:13:23.361 "trtype": "TCP" 00:13:23.361 } 00:13:23.361 ] 00:13:23.361 }, 00:13:23.361 { 00:13:23.361 "name": "nvmf_tgt_poll_group_001", 00:13:23.361 "admin_qpairs": 2, 00:13:23.361 "io_qpairs": 84, 00:13:23.361 "current_admin_qpairs": 0, 00:13:23.361 "current_io_qpairs": 0, 00:13:23.361 "pending_bdev_io": 0, 00:13:23.361 "completed_nvme_io": 134, 00:13:23.361 "transports": [ 00:13:23.361 { 00:13:23.361 "trtype": "TCP" 00:13:23.361 } 00:13:23.361 ] 00:13:23.361 }, 00:13:23.361 { 00:13:23.361 "name": "nvmf_tgt_poll_group_002", 00:13:23.361 "admin_qpairs": 1, 00:13:23.361 "io_qpairs": 84, 00:13:23.361 "current_admin_qpairs": 0, 00:13:23.361 "current_io_qpairs": 0, 00:13:23.361 "pending_bdev_io": 0, 00:13:23.361 "completed_nvme_io": 283, 00:13:23.361 "transports": [ 00:13:23.361 { 00:13:23.361 "trtype": "TCP" 00:13:23.361 } 00:13:23.361 ] 00:13:23.361 }, 00:13:23.361 { 00:13:23.361 "name": "nvmf_tgt_poll_group_003", 00:13:23.361 "admin_qpairs": 2, 00:13:23.361 "io_qpairs": 84, 00:13:23.361 "current_admin_qpairs": 0, 00:13:23.361 "current_io_qpairs": 0, 00:13:23.361 "pending_bdev_io": 0, 00:13:23.361 "completed_nvme_io": 169, 00:13:23.361 "transports": [ 00:13:23.361 { 00:13:23.361 "trtype": "TCP" 00:13:23.361 } 00:13:23.361 ] 00:13:23.361 } 00:13:23.361 ] 00:13:23.361 }' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.361 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.362 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:23.362 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.362 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:23.362 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.362 06:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.362 rmmod nvme_tcp 00:13:23.619 rmmod nvme_fabrics 00:13:23.619 rmmod nvme_keyring 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:23.619 06:42:11 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 572880 ']' 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 572880 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 572880 ']' 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 572880 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 572880 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 572880' 00:13:23.619 killing process with pid 572880 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 572880 00:13:23.619 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 572880 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.878 06:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.791 06:42:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.791 00:13:25.791 real 0m25.262s 00:13:25.791 user 1m22.491s 00:13:25.791 sys 0m3.987s 00:13:25.791 06:42:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.791 06:42:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 ************************************ 00:13:25.791 END TEST nvmf_rpc 00:13:25.791 ************************************ 00:13:25.791 06:42:13 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.791 06:42:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:25.791 06:42:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.791 06:42:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.054 ************************************ 00:13:26.054 START TEST nvmf_invalid 00:13:26.054 ************************************ 00:13:26.054 06:42:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.055 * Looking for test storage... 
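[Editor's note] The nvmf_rpc trace that ends above repeatedly creates a subsystem, attaches a TCP listener and the Malloc1 namespace, then tears everything down, and finally checks that the per-poll-group counters reported by nvmf_get_stats are nonzero using a jq+awk helper (jsum, target/rpc.sh@19-20). A minimal sketch of that pattern, assuming direct rpc.py calls in place of the suite's rpc_cmd wrapper and folding the captured $stats into the helper for brevity; the value of $loops is set earlier in rpc.sh and is not visible in this excerpt:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  jsum() {
      local filter=$1
      # Sum one numeric field across all poll groups, as the trace does
      # with jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'.
      $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }

  for i in $(seq 1 "$loops"); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # the run above summed to 7
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # the run above summed to 336

Every RPC name and argument in the loop is taken verbatim from the trace above; only the wrapper plumbing is simplified.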
00:13:26.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.055 06:42:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.955 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms
00:13:27.956
00:13:27.956 --- 10.0.0.2 ping statistics ---
00:13:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:27.956 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:27.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:27.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:13:27.956
00:13:27.956 --- 10.0.0.1 ping statistics ---
00:13:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:27.956 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=577987
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 577987
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 577987 ']'
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:27.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:27.956 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:27.956 [2024-07-15 06:42:15.546849] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
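[Editor's note] The nvmfappstart sequence just above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for its JSON-RPC socket via waitforlisten, using the rpc_addr and max_retries variables visible in the trace. A minimal sketch of what that wait appears to do; the retry-loop body is an assumption inferred from the traced variables, not copied from autotest_common.sh:

  nvmf_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  ip netns exec cvl_0_0_ns_spdk "$nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( max_retries-- )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [[ -S $rpc_addr ]] && return 0           # RPC socket is up; ready for rpc.py
          sleep 0.1
      done
      return 1
  }
  waitforlisten "$nvmfpid"

Running the target inside the namespace is what lets the same host act as both target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1), which the two pings above verify in each direction.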
00:13:27.956 [2024-07-15 06:42:15.546950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.214 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.214 [2024-07-15 06:42:15.621298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.214 [2024-07-15 06:42:15.712236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.214 [2024-07-15 06:42:15.712306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.214 [2024-07-15 06:42:15.712323] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.214 [2024-07-15 06:42:15.712336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.214 [2024-07-15 06:42:15.712347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.214 [2024-07-15 06:42:15.712427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.214 [2024-07-15 06:42:15.712484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.214 [2024-07-15 06:42:15.712602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.214 [2024-07-15 06:42:15.712604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.472 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:28.472 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.473 06:42:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4111 00:13:28.730 [2024-07-15 06:42:16.148548] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:28.730 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:28.730 { 00:13:28.730 "nqn": "nqn.2016-06.io.spdk:cnode4111", 00:13:28.730 "tgt_name": "foobar", 00:13:28.730 "method": "nvmf_create_subsystem", 00:13:28.730 "req_id": 1 00:13:28.730 } 00:13:28.730 Got JSON-RPC error response 00:13:28.730 response: 00:13:28.730 { 00:13:28.730 "code": -32603, 00:13:28.730 "message": "Unable to find target foobar" 00:13:28.730 }' 00:13:28.730 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:28.730 { 00:13:28.730 "nqn": "nqn.2016-06.io.spdk:cnode4111", 00:13:28.730 "tgt_name": "foobar", 00:13:28.730 "method": "nvmf_create_subsystem", 00:13:28.730 "req_id": 1 00:13:28.730 } 00:13:28.730 Got JSON-RPC error response 00:13:28.730 response: 00:13:28.730 { 00:13:28.730 "code": -32603, 00:13:28.730 "message": "Unable to find target foobar" 00:13:28.730 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:28.730 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:28.730 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27001 00:13:28.987 [2024-07-15 06:42:16.417482] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27001: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:28.987 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:28.987 { 00:13:28.987 "nqn": "nqn.2016-06.io.spdk:cnode27001", 00:13:28.987 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.987 "method": "nvmf_create_subsystem", 00:13:28.987 "req_id": 1 00:13:28.987 } 00:13:28.987 Got JSON-RPC error response 00:13:28.987 response: 00:13:28.987 { 00:13:28.987 "code": -32602, 00:13:28.987 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.987 }' 00:13:28.987 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:28.987 { 00:13:28.987 "nqn": "nqn.2016-06.io.spdk:cnode27001", 00:13:28.987 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.987 "method": "nvmf_create_subsystem", 00:13:28.987 "req_id": 1 00:13:28.987 } 00:13:28.987 Got JSON-RPC error response 00:13:28.987 response: 00:13:28.987 { 00:13:28.987 "code": -32602, 00:13:28.987 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.987 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.987 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:28.987 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18847 00:13:29.245 [2024-07-15 06:42:16.670296] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18847: invalid model number 'SPDK_Controller' 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:29.245 { 00:13:29.245 "nqn": "nqn.2016-06.io.spdk:cnode18847", 00:13:29.245 "model_number": "SPDK_Controller\u001f", 00:13:29.245 "method": "nvmf_create_subsystem", 00:13:29.245 "req_id": 1 00:13:29.245 } 00:13:29.245 Got JSON-RPC error response 00:13:29.245 response: 00:13:29.245 { 00:13:29.245 "code": -32602, 00:13:29.245 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.245 }' 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:29.245 { 00:13:29.245 "nqn": "nqn.2016-06.io.spdk:cnode18847", 00:13:29.245 "model_number": "SPDK_Controller\u001f", 00:13:29.245 "method": "nvmf_create_subsystem", 00:13:29.245 "req_id": 1 00:13:29.245 } 00:13:29.245 Got JSON-RPC error response 00:13:29.245 response: 00:13:29.245 { 00:13:29.245 "code": -32602, 00:13:29.245 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.245 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:29.245 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 63 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4Hvxru@R)!g"wtj?DG\+A' 00:13:29.246 06:42:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4Hvxru@R)!g"wtj?DG\+A' nqn.2016-06.io.spdk:cnode7816 00:13:29.505 [2024-07-15 06:42:16.999442] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7816: invalid serial number '4Hvxru@R)!g"wtj?DG\+A' 00:13:29.505 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:29.505 { 00:13:29.505 "nqn": "nqn.2016-06.io.spdk:cnode7816", 00:13:29.505 "serial_number": "4Hvxru@R)!g\"wtj?DG\\+A", 00:13:29.505 "method": "nvmf_create_subsystem", 00:13:29.505 "req_id": 1 00:13:29.505 } 00:13:29.505 Got JSON-RPC error response 00:13:29.505 response: 00:13:29.505 { 00:13:29.505 "code": -32602, 
00:13:29.505 "message": "Invalid SN 4Hvxru@R)!g\"wtj?DG\\+A" 00:13:29.505 }' 00:13:29.505 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:29.506 { 00:13:29.506 "nqn": "nqn.2016-06.io.spdk:cnode7816", 00:13:29.506 "serial_number": "4Hvxru@R)!g\"wtj?DG\\+A", 00:13:29.506 "method": "nvmf_create_subsystem", 00:13:29.506 "req_id": 1 00:13:29.506 } 00:13:29.506 Got JSON-RPC error response 00:13:29.506 response: 00:13:29.506 { 00:13:29.506 "code": -32602, 00:13:29.506 "message": "Invalid SN 4Hvxru@R)!g\"wtj?DG\\+A" 00:13:29.506 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:29.506 06:42:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.506 06:42:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:29.506 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.507 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.507 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:29.507 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:29.507 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.765 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
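
The long run of printf %x / echo -e / string+= statements above is invalid.sh's random-string builder: each iteration picks one code point, renders it as a raw byte, and appends it; the loop's final two characters (M and L) and the finished 41-character string appear just below. Forty-one characters is one more than the 40-byte NVMe model-number field allows, which is the point of the test. A minimal sketch of the same technique, using a helper name of my own rather than the script's exact internals:

    # Build a random printable string one byte at a time, the way the
    # trace above does it (printf %x to hex, echo -e to render the byte).
    gen_random_string() {
        local length=$1 string='' ll code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 94 + 33 ))   # printable ASCII, 0x21..0x7e
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        printf '%s\n' "$string"
    }

    bad_mn=$(gen_random_string 41)
    # Negative test, mirroring the trace: a random over-length model number
    # must be rejected by nvmf_create_subsystem with an "Invalid MN" error.
    scripts/rpc.py nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode9833

The test captures the JSON-RPC error text in out= and passes only if it matches *Invalid MN*, which is the [[ ... ]] check visible a few lines down.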
00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '>W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML' 00:13:29.766 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML' nqn.2016-06.io.spdk:cnode9833 00:13:30.024 [2024-07-15 06:42:17.388716] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9833: invalid model number '>W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML' 00:13:30.024 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:30.024 { 00:13:30.024 "nqn": "nqn.2016-06.io.spdk:cnode9833", 00:13:30.024 "model_number": ">W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML", 00:13:30.024 "method": "nvmf_create_subsystem", 00:13:30.024 "req_id": 1 00:13:30.024 } 00:13:30.024 Got JSON-RPC error response 00:13:30.024 response: 00:13:30.024 { 00:13:30.024 "code": -32602, 00:13:30.024 "message": "Invalid MN >W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML" 00:13:30.024 }' 00:13:30.024 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:30.024 { 00:13:30.024 "nqn": "nqn.2016-06.io.spdk:cnode9833", 00:13:30.024 "model_number": ">W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML", 00:13:30.024 "method": "nvmf_create_subsystem", 00:13:30.024 "req_id": 1 00:13:30.024 } 00:13:30.024 Got JSON-RPC error response 00:13:30.024 response: 00:13:30.024 { 00:13:30.024 "code": -32602, 00:13:30.024 "message": "Invalid MN >W#ugmc6BQG*|~!q?[OiHC-%37=JRBh/6vx3ws.ML" 00:13:30.024 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.024 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:30.282 [2024-07-15 06:42:17.637659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.282 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:30.542 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:30.542 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:30.542 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:30.542 06:42:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:13:30.542 06:42:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:30.542 [2024-07-15 06:42:18.135235] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:30.801 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:30.801 { 00:13:30.801 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:30.801 "listen_address": { 00:13:30.801 "trtype": "tcp", 00:13:30.801 "traddr": "", 00:13:30.801 "trsvcid": "4421" 00:13:30.801 }, 00:13:30.801 "method": "nvmf_subsystem_remove_listener", 00:13:30.801 "req_id": 1 00:13:30.801 } 00:13:30.801 Got JSON-RPC error response 00:13:30.801 response: 00:13:30.801 { 00:13:30.801 "code": -32602, 00:13:30.801 "message": "Invalid parameters" 00:13:30.801 }' 00:13:30.801 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:30.801 { 00:13:30.801 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:30.801 "listen_address": { 00:13:30.801 "trtype": "tcp", 00:13:30.801 "traddr": "", 00:13:30.801 "trsvcid": "4421" 00:13:30.801 }, 00:13:30.801 "method": "nvmf_subsystem_remove_listener", 00:13:30.801 "req_id": 1 00:13:30.801 } 00:13:30.801 Got JSON-RPC error response 00:13:30.801 response: 00:13:30.801 { 00:13:30.801 "code": -32602, 00:13:30.801 "message": "Invalid parameters" 00:13:30.801 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:30.801 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4523 -i 0 00:13:30.801 [2024-07-15 06:42:18.392039] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4523: invalid cntlid range [0-65519] 00:13:30.801 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:30.801 { 00:13:30.801 "nqn": "nqn.2016-06.io.spdk:cnode4523", 00:13:30.801 "min_cntlid": 0, 00:13:30.801 "method": "nvmf_create_subsystem", 00:13:30.801 "req_id": 1 00:13:30.801 } 00:13:30.801 Got JSON-RPC error response 00:13:30.801 response: 00:13:30.801 { 00:13:30.801 "code": -32602, 00:13:30.801 "message": "Invalid cntlid range [0-65519]" 00:13:30.801 }' 00:13:30.801 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:30.801 { 00:13:30.801 "nqn": "nqn.2016-06.io.spdk:cnode4523", 00:13:30.801 "min_cntlid": 0, 00:13:30.801 "method": "nvmf_create_subsystem", 00:13:30.801 "req_id": 1 00:13:30.801 } 00:13:30.801 Got JSON-RPC error response 00:13:30.801 response: 00:13:30.801 { 00:13:30.801 "code": -32602, 00:13:30.801 "message": "Invalid cntlid range [0-65519]" 00:13:30.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.058 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4928 -i 65520 00:13:31.058 [2024-07-15 06:42:18.632811] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4928: invalid cntlid range [65520-65519] 00:13:31.058 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:31.058 { 00:13:31.058 "nqn": "nqn.2016-06.io.spdk:cnode4928", 00:13:31.058 "min_cntlid": 65520, 00:13:31.058 "method": "nvmf_create_subsystem", 00:13:31.058 "req_id": 1 00:13:31.058 } 00:13:31.058 Got JSON-RPC error response 00:13:31.058 
response: 00:13:31.058 { 00:13:31.058 "code": -32602, 00:13:31.058 "message": "Invalid cntlid range [65520-65519]" 00:13:31.058 }' 00:13:31.058 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:31.058 { 00:13:31.058 "nqn": "nqn.2016-06.io.spdk:cnode4928", 00:13:31.058 "min_cntlid": 65520, 00:13:31.058 "method": "nvmf_create_subsystem", 00:13:31.058 "req_id": 1 00:13:31.058 } 00:13:31.059 Got JSON-RPC error response 00:13:31.059 response: 00:13:31.059 { 00:13:31.059 "code": -32602, 00:13:31.059 "message": "Invalid cntlid range [65520-65519]" 00:13:31.059 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.059 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6615 -I 0 00:13:31.316 [2024-07-15 06:42:18.885699] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6615: invalid cntlid range [1-0] 00:13:31.316 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:31.316 { 00:13:31.316 "nqn": "nqn.2016-06.io.spdk:cnode6615", 00:13:31.316 "max_cntlid": 0, 00:13:31.316 "method": "nvmf_create_subsystem", 00:13:31.316 "req_id": 1 00:13:31.316 } 00:13:31.316 Got JSON-RPC error response 00:13:31.316 response: 00:13:31.316 { 00:13:31.316 "code": -32602, 00:13:31.316 "message": "Invalid cntlid range [1-0]" 00:13:31.316 }' 00:13:31.316 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:31.316 { 00:13:31.316 "nqn": "nqn.2016-06.io.spdk:cnode6615", 00:13:31.316 "max_cntlid": 0, 00:13:31.316 "method": "nvmf_create_subsystem", 00:13:31.316 "req_id": 1 00:13:31.316 } 00:13:31.316 Got JSON-RPC error response 00:13:31.316 response: 00:13:31.316 { 00:13:31.316 "code": -32602, 00:13:31.316 "message": "Invalid cntlid range [1-0]" 00:13:31.316 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.316 06:42:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7126 -I 65520 00:13:31.573 [2024-07-15 06:42:19.134511] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7126: invalid cntlid range [1-65520] 00:13:31.573 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:31.573 { 00:13:31.573 "nqn": "nqn.2016-06.io.spdk:cnode7126", 00:13:31.573 "max_cntlid": 65520, 00:13:31.573 "method": "nvmf_create_subsystem", 00:13:31.573 "req_id": 1 00:13:31.573 } 00:13:31.573 Got JSON-RPC error response 00:13:31.573 response: 00:13:31.573 { 00:13:31.573 "code": -32602, 00:13:31.573 "message": "Invalid cntlid range [1-65520]" 00:13:31.573 }' 00:13:31.573 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:31.573 { 00:13:31.573 "nqn": "nqn.2016-06.io.spdk:cnode7126", 00:13:31.573 "max_cntlid": 65520, 00:13:31.573 "method": "nvmf_create_subsystem", 00:13:31.573 "req_id": 1 00:13:31.573 } 00:13:31.573 Got JSON-RPC error response 00:13:31.573 response: 00:13:31.573 { 00:13:31.573 "code": -32602, 00:13:31.573 "message": "Invalid cntlid range [1-65520]" 00:13:31.573 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.573 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27723 -i 6 -I 5 00:13:31.831 [2024-07-15 06:42:19.395401] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27723: invalid cntlid range [6-5] 00:13:31.831 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:31.831 { 00:13:31.831 "nqn": "nqn.2016-06.io.spdk:cnode27723", 00:13:31.831 "min_cntlid": 6, 00:13:31.831 "max_cntlid": 5, 00:13:31.831 "method": "nvmf_create_subsystem", 00:13:31.831 "req_id": 1 00:13:31.831 } 00:13:31.831 Got JSON-RPC error response 00:13:31.831 response: 00:13:31.831 { 00:13:31.831 "code": -32602, 00:13:31.831 "message": "Invalid cntlid range [6-5]" 00:13:31.831 }' 00:13:31.831 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:31.831 { 00:13:31.831 "nqn": "nqn.2016-06.io.spdk:cnode27723", 00:13:31.831 "min_cntlid": 6, 00:13:31.831 "max_cntlid": 5, 00:13:31.831 "method": "nvmf_create_subsystem", 00:13:31.831 "req_id": 1 00:13:31.831 } 00:13:31.831 Got JSON-RPC error response 00:13:31.831 response: 00:13:31.831 { 00:13:31.831 "code": -32602, 00:13:31.831 "message": "Invalid cntlid range [6-5]" 00:13:31.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.831 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:32.090 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:32.090 { 00:13:32.090 "name": "foobar", 00:13:32.090 "method": "nvmf_delete_target", 00:13:32.090 "req_id": 1 00:13:32.090 } 00:13:32.090 Got JSON-RPC error response 00:13:32.090 response: 00:13:32.090 { 00:13:32.090 "code": -32602, 00:13:32.090 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:32.090 }' 00:13:32.090 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:32.091 { 00:13:32.091 "name": "foobar", 00:13:32.091 "method": "nvmf_delete_target", 00:13:32.091 "req_id": 1 00:13:32.091 } 00:13:32.091 Got JSON-RPC error response 00:13:32.091 response: 00:13:32.091 { 00:13:32.091 "code": -32602, 00:13:32.091 "message": "The specified target doesn't exist, cannot delete it." 
00:13:32.091 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.091 rmmod nvme_tcp 00:13:32.091 rmmod nvme_fabrics 00:13:32.091 rmmod nvme_keyring 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 577987 ']' 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 577987 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 577987 ']' 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 577987 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 577987 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 577987' 00:13:32.091 killing process with pid 577987 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 577987 00:13:32.091 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 577987 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.352 06:42:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.286 06:42:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.286 00:13:34.286 real 0m8.491s 00:13:34.286 user 0m20.068s 00:13:34.286 sys 0m2.337s 00:13:34.286 06:42:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:34.286 06:42:21 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.286 ************************************ 00:13:34.286 END TEST nvmf_invalid 00:13:34.286 ************************************ 00:13:34.545 06:42:21 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.545 06:42:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:34.545 06:42:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:34.545 06:42:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:34.545 ************************************ 00:13:34.545 START TEST nvmf_abort 00:13:34.545 ************************************ 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.545 * Looking for test storage... 00:13:34.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.545 06:42:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:34.545 06:42:22 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.545 06:42:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.450 
06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:36.450 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:36.450 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:36.450 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:36.450 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.450 06:42:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
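
The nvmf_tcp_init sequence above wires the two ice-driver port netdevs (cvl_0_0, cvl_0_1) into a point-to-point topology on a single host: the target-side port is moved into its own network namespace so initiator and target traffic really crosses the NIC. Condensed from the trace (interface names and addresses exactly as in this run), the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity check, as below

The two ping exchanges that follow confirm reachability in each direction before the target is started.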
00:13:36.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:13:36.450 00:13:36.450 --- 10.0.0.2 ping statistics --- 00:13:36.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.450 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:13:36.450 00:13:36.450 --- 10.0.0.1 ping statistics --- 00:13:36.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.450 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.450 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=580620 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 580620 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 580620 ']' 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:36.709 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.709 [2024-07-15 06:42:24.140405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
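
With the namespace verified, nvmfappstart launches the target inside it: -i 0 sets the shared-memory ID, -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice below), and -m 0xE is a core mask binding three reactors to cores 1-3, which is why three "Reactor started" lines follow the EAL banner. A minimal sketch of the launch-and-wait idiom, with a simple polling loop standing in for the harness's waitforlisten helper (an assumption on my part, not that helper's actual code):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
    until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done

The EAL initialization output the target prints on startup continues below.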
00:13:36.709 [2024-07-15 06:42:24.140488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.709 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.709 [2024-07-15 06:42:24.211706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.709 [2024-07-15 06:42:24.304491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.709 [2024-07-15 06:42:24.304551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.709 [2024-07-15 06:42:24.304568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.709 [2024-07-15 06:42:24.304582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.709 [2024-07-15 06:42:24.304594] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.709 [2024-07-15 06:42:24.304676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.709 [2024-07-15 06:42:24.304730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.709 [2024-07-15 06:42:24.304733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 [2024-07-15 06:42:24.459618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 Malloc0 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 Delay0 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:36.967 06:42:24 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.967 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.968 [2024-07-15 06:42:24.531525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.968 06:42:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:36.968 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.225 [2024-07-15 06:42:24.596263] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:39.756 Initializing NVMe Controllers 00:13:39.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:39.756 controller IO queue size 128 less than required 00:13:39.756 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:39.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:39.757 Initialization complete. Launching workers. 
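
Everything the abort run needs was assembled by the rpc.py calls traced above: a 64 MiB, 4 KiB-block malloc bdev wrapped in a delay bdev (average and p99 read/write latency of 1,000,000 µs, i.e. one second) so in-flight I/O stays outstanding long enough to be abortable, exported through subsystem cnode0 on the TCP listener. The abort example app then connects with a queue depth of 128, more than the controller's 128-entry I/O queue can fully accommodate (the notice above), so excess requests queue in the NVMe driver and become abort candidates. A condensed replay of the sequence, paths relative to the spdk checkout:

    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # latencies in microseconds
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The counters that follow show the outcome: of roughly 34k aborts submitted, nearly all succeeded and none failed outright.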
00:13:39.757 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34113 00:13:39.757 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34174, failed to submit 62 00:13:39.757 success 34117, unsuccess 57, failed 0 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.757 rmmod nvme_tcp 00:13:39.757 rmmod nvme_fabrics 00:13:39.757 rmmod nvme_keyring 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 580620 ']' 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 580620 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 580620 ']' 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 580620 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 580620 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 580620' 00:13:39.757 killing process with pid 580620 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 580620 00:13:39.757 06:42:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 580620 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.757 06:42:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.666 06:42:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.666 00:13:41.666 real 0m7.208s 00:13:41.666 user 0m10.588s 00:13:41.666 sys 0m2.468s 00:13:41.666 06:42:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.666 06:42:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.666 ************************************ 00:13:41.666 END TEST nvmf_abort 00:13:41.666 ************************************ 00:13:41.666 06:42:29 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.666 06:42:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.666 06:42:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.666 06:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.666 ************************************ 00:13:41.666 START TEST nvmf_ns_hotplug_stress 00:13:41.666 ************************************ 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.666 * Looking for test storage... 00:13:41.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.666 06:42:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.666 06:42:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.666 06:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.568 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.828 06:42:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
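For reference, the nvmf_tcp_init sequence that the trace below walks through reduces to the following (a condensed sketch using the cvl_0_0/cvl_0_1 interface names discovered above; the authoritative logic is in spdk/test/nvmf/common.sh):

    # move the target-side port into a private network namespace,
    # keep the initiator-side port in the default namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1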
00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.828 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:43.829 00:13:43.829 --- 10.0.0.2 ping statistics --- 00:13:43.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.829 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:13:43.829 00:13:43.829 --- 10.0.0.1 ping statistics --- 00:13:43.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.829 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=582843 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 582843 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 582843 ']' 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.829 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 [2024-07-15 06:42:31.406605] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:43.829 [2024-07-15 06:42:31.406689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.087 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.087 [2024-07-15 06:42:31.478208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.087 [2024-07-15 06:42:31.571848] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:44.087 [2024-07-15 06:42:31.571934] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.087 [2024-07-15 06:42:31.571951] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.087 [2024-07-15 06:42:31.571964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.087 [2024-07-15 06:42:31.571976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.087 [2024-07-15 06:42:31.572061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.087 [2024-07-15 06:42:31.572120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.087 [2024-07-15 06:42:31.572123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.087 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.087 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:44.087 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.087 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.087 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.344 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:44.344 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.604 [2024-07-15 06:42:31.958884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.604 06:42:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:44.862 06:42:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.862 [2024-07-15 06:42:32.445663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.862 06:42:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.119 06:42:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:45.378 Malloc0 00:13:45.378 06:42:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:45.636 Delay0 00:13:45.636 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.893 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:46.150 NULL1 00:13:46.150 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:46.407 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=583141 00:13:46.407 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:46.407 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:46.407 06:42:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.407 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.665 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.923 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:46.923 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:47.181 true 00:13:47.181 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:47.181 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.438 06:42:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.696 06:42:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:47.696 06:42:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:47.953 true 00:13:47.953 06:42:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:47.953 06:42:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.917 Read completed with error (sct=0, sc=11) 00:13:48.917 06:42:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.176 06:42:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:49.176 06:42:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:49.176 true 00:13:49.176 06:42:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:49.176 06:42:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.434 06:42:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.692 06:42:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:49.692 06:42:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:49.949 true 00:13:49.949 06:42:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:49.949 06:42:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.887 06:42:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.145 06:42:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:51.145 06:42:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:51.404 true 00:13:51.404 06:42:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:51.404 06:42:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.662 06:42:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.957 06:42:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:51.957 06:42:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:52.215 true 00:13:52.215 06:42:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:52.215 06:42:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.151 06:42:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.408 06:42:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:53.408 06:42:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1007 00:13:53.665 true 00:13:53.665 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:53.665 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.922 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.180 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:54.180 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:54.437 true 00:13:54.437 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:54.437 06:42:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.369 06:42:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.626 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:55.626 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:55.882 true 00:13:55.882 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:55.882 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.139 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.396 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:56.396 06:42:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:56.655 true 00:13:56.655 06:42:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:56.655 06:42:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.589 06:42:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.589 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:57.589 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1011 00:13:57.846 true 00:13:57.846 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:57.846 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.103 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.361 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:58.361 06:42:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:58.619 true 00:13:58.619 06:42:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:13:58.619 06:42:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.553 06:42:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.811 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:59.811 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:00.068 true 00:14:00.068 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:00.068 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.326 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.584 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:00.584 06:42:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:00.584 true 00:14:00.842 06:42:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:00.842 06:42:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.775 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.033 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:02.033 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:02.291 true 00:14:02.291 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:02.291 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.549 06:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.807 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:02.807 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:03.065 true 00:14:03.065 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:03.065 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.323 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.581 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:03.581 06:42:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:03.839 true 00:14:03.839 06:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:03.839 06:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.774 06:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.032 06:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:05.032 06:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:05.290 true 00:14:05.290 06:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:05.290 06:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.548 06:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.804 06:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:05.804 06:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1019 00:14:06.059 true 00:14:06.059 06:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:06.059 06:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.989 06:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.247 06:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:07.247 06:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:07.504 true 00:14:07.504 06:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:07.504 06:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.762 06:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.020 06:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:08.020 06:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:08.277 true 00:14:08.277 06:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:08.277 06:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.210 06:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.467 06:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:09.467 06:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:09.467 true 00:14:09.723 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:09.723 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.724 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.981 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:09.981 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1023 00:14:10.238 true 00:14:10.238 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:10.238 06:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.181 06:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.437 06:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:11.437 06:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:11.693 true 00:14:11.693 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:11.693 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.950 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.208 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:12.208 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:12.466 true 00:14:12.466 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:12.466 06:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.418 06:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:13.724 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:13.724 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:13.724 true 00:14:13.986 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:13.986 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.986 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.243 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:14.243 06:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:14.501 true 00:14:14.501 06:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:14.501 06:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.437 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.952 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:15.952 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:15.952 true 00:14:15.952 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:15.952 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.209 06:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.465 06:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:16.465 06:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:16.723 true 00:14:16.723 06:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:16.723 06:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.655 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.655 Initializing NVMe Controllers 00:14:17.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.655 Controller IO queue size 128, less than required. 00:14:17.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.655 Controller IO queue size 128, less than required. 00:14:17.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:17.655 Initialization complete. Launching workers. 
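When the 30-second perf run finishes, the summary below reports one row per namespace plus a Total row. Sanity-checking the Total row against the per-namespace rows (rounding aside): the IOPS add up, 626.30 + 10459.00 = 11085.30, and the Total average latency is the IOPS-weighted mean of the two rows, (626.30 * 105237.66 + 10459.00 * 12202.13) / 11085.30 ≈ 17458.48 us. NSID 1 is the Delay0 namespace that the loop above keeps removing and re-adding, which is consistent with its far lower completion count and far higher average latency than NSID 2.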
00:14:17.655 ======================================================== 00:14:17.655 Latency(us) 00:14:17.655 Device Information : IOPS MiB/s Average min max 00:14:17.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 626.30 0.31 105237.66 2813.87 1014384.05 00:14:17.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10459.00 5.11 12202.13 1689.27 448523.69 00:14:17.655 ======================================================== 00:14:17.655 Total : 11085.30 5.41 17458.48 1689.27 1014384.05 00:14:17.655 00:14:17.913 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:17.913 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:18.171 true 00:14:18.171 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 583141 00:14:18.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (583141) - No such process 00:14:18.171 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 583141 00:14:18.171 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.428 06:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.686 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:18.686 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:18.686 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:18.686 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.686 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:18.944 null0 00:14:18.944 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.944 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.944 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:19.203 null1 00:14:19.203 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.203 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.203 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:19.203 null2 00:14:19.461 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.461 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.461 06:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:19.461 null3 
00:14:19.461 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.461 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.461 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:19.718 null4 00:14:19.718 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.718 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.718 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:19.976 null5 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:20.235 null6 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.235 06:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:20.494 null7 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.754 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 587319 587320 587321 587324 587326 587328 587330 587332 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.755 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.014 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
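Everything from here down to the final wait is eight copies of the same worker interleaving; the out-of-order @17/@18 entries are those workers racing, which is the point of the stress. Pieced together from the sh@14-18 and sh@62-66 lines (and reusing $rpc, nthreads, and pids from the sketch above), the worker and its launcher look roughly like the following; this is a reconstruction from the trace, not the verbatim script:

    add_remove() {
        # sh@14: each worker owns one nsid/bdev pair
        local nsid=$1 bdev=$2
        # sh@16-18: ten attach/detach rounds against the same subsystem
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    # sh@62-64: fork one worker per namespace, collecting PIDs
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &    # backgrounded, so each worker's i is its own
        pids+=($!)
    done
    # sh@66: matches the eight PIDs traced above (587319 587320 ... 587332)
    wait "${pids[@]}"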
nqn.2016-06.io.spdk:cnode1 null6 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.273 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.532 06:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.790 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.791 06:43:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.791 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.049 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.308 06:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.567 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.825 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.825 
06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.084 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.343 06:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.602 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.861 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.120 
06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.120 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.378 06:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.637 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.952 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.209 06:43:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.209 06:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.467 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.726 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.982 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.239 rmmod nvme_tcp 00:14:26.239 rmmod nvme_fabrics 00:14:26.239 rmmod nvme_keyring 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 582843 ']' 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 582843 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 582843 ']' 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 582843 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 582843 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 582843' 00:14:26.239 killing process with pid 582843 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 582843 00:14:26.239 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 582843 00:14:26.498 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.498 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.498 06:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
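Ten rounds later the loop counters run out, the trap is cleared, and nvmftestfini tears the target down: the three rmmod lines are the kernel initiator modules unloading, and killprocess stops the nvmf_tgt app (pid 582843, running as reactor_1). Judging from the autotest_common.sh@946-970 entries, the helper behaves roughly like the sketch below; this is inferred from the trace, not the exact upstream source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @946: a pid argument is required
        kill -0 "$pid" || return 1             # @950: and it must still be alive
        if [ "$(uname)" = Linux ]; then        # @951
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @952: reactor_1 here
            # @956: a 'sudo' wrapper would need special handling; false in this run
        fi
        echo "killing process with pid $pid"   # @964
        kill "$pid"                            # @965
        wait "$pid"                            # @970: reap it so failures propagate
    }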
00:14:28.399
00:14:28.399 real 0m46.792s
00:14:28.399 user 3m33.790s
00:14:28.399 sys 0m16.269s
00:14:28.399 06:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:28.399 06:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:28.399 ************************************
00:14:28.399 END TEST nvmf_ns_hotplug_stress
00:14:28.399 ************************************
00:14:28.399 06:43:16 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:28.399 06:43:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:28.399 06:43:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:28.399 06:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:28.657 ************************************
00:14:28.657 START TEST nvmf_connect_stress
00:14:28.657 ************************************
00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:28.657 * Looking for test storage...
00:14:28.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.657 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:28.658 06:43:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.558 06:43:17 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:14:30.558 Found net devices under 0000:0a:00.1: cvl_0_1
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:30.558 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:30.559 06:43:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
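This is the wiring that lets a single host drive real NIC-to-NIC NVMe/TCP traffic: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the root namespace. A minimal sketch of the same setup, using the commands from the trace:

    ip netns add cvl_0_0_ns_spdk                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

Both directions are then ping-tested before anything NVMe-specific runs, as the next lines show.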
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:30.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:30.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:14:30.559
00:14:30.559 --- 10.0.0.2 ping statistics ---
00:14:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.559 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:30.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:30.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:14:30.559
00:14:30.559 --- 10.0.0.1 ping statistics ---
00:14:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.559 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=590071
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 590071
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 590071 ']'
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:30.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:30.559 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:30.817 [2024-07-15 06:43:18.174920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
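With connectivity verified, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls for its RPC socket; the script then provisions the target over that socket. In terms of direct scripts/rpc.py calls this is roughly (rpc_cmd in the harness wraps the same RPCs; the arguments mirror the trace that follows):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                           # waitforlisten blocks on /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u: 8 KiB I/O unit size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # per the trace: 1000 MB, 512 B blocks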
00:14:30.817 [2024-07-15 06:43:18.174995] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.817 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.817 [2024-07-15 06:43:18.243819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.817 [2024-07-15 06:43:18.336590] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.817 [2024-07-15 06:43:18.336650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.817 [2024-07-15 06:43:18.336666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.817 [2024-07-15 06:43:18.336680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.817 [2024-07-15 06:43:18.336691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.817 [2024-07-15 06:43:18.336775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.817 [2024-07-15 06:43:18.336960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.817 [2024-07-15 06:43:18.336964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 [2024-07-15 06:43:18.481662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 [2024-07-15 06:43:18.517094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 NULL1 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=590103 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.075 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.333 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.333 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:31.333 06:43:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.333 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.333 06:43:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.898 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.898 06:43:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:31.898 06:43:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.898 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.898 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.154 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.154 06:43:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:32.154 06:43:19 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.154 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.154 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.412 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.412 06:43:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:32.412 06:43:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.412 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.412 06:43:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.669 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.669 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:32.669 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.669 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.669 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.926 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.926 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:32.926 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.926 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.926 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.490 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.490 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:33.490 06:43:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.490 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.490 06:43:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.747 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.747 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:33.747 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.747 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.747 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.004 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.004 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:34.004 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.004 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.004 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.261 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.261 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:34.261 06:43:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:34.261 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.261 06:43:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.518 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.518 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:34.518 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.518 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.518 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.081 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.081 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:35.081 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.081 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.081 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.339 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.339 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:35.339 06:43:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.339 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.339 06:43:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.596 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.596 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:35.596 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.596 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.596 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.852 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.852 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:35.852 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.852 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.852 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.108 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:36.108 06:43:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.108 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.108 06:43:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.672 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.672 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:36.672 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.672 06:43:24 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.672 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.930 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.930 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:36.930 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.930 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.930 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.187 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.187 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:37.187 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.187 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.187 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.443 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.443 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:37.443 06:43:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.443 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.443 06:43:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.007 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.007 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:38.007 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.007 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.007 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.264 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.264 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:38.264 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.264 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.264 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.521 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.521 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:38.521 06:43:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.521 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.521 06:43:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.778 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.778 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:38.778 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.778 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.778 
06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.035 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.035 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:39.036 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.036 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.036 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.599 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.599 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:39.599 06:43:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.599 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.599 06:43:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.856 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.856 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:39.856 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.856 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.856 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.113 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.113 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:40.113 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.113 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.113 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.370 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.370 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:40.370 06:43:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.370 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.370 06:43:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.626 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.626 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:40.626 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.626 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.626 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.191 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.191 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103 00:14:41.191 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.191 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.191 06:43:28 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x
00:14:41.192 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 590103
00:14:41.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (590103) - No such process
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 590103
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
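The "No such process" above is the expected exit path: connect_stress ran with -t 10 and stopped on its own, so the script's liveness probe finally failed. In outline, the stress pattern traced here is (a hedged reconstruction from the xtrace, not the literal script):

    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID"; do   # line 34: probe fails once the workload exits
        rpc_cmd < "$rpcs"           # line 35: replay the 20 queued RPC snippets
    done
    wait "$PERF_PID"                # line 38: collect the exit status
    rm -f "$rpcs"                   # line 39: drop the rpc.txt scratch file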
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:41.484 rmmod nvme_tcp
00:14:41.484 rmmod nvme_fabrics
00:14:41.484 rmmod nvme_keyring
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 590071 ']'
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 590071
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 590071 ']'
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 590071
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 590071
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 590071'
00:14:41.484 killing process with pid 590071
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 590071
00:14:41.484 06:43:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 590071
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:41.743 06:43:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:43.648 06:43:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:43.648
00:14:43.648 real 0m15.176s
00:14:43.648 user 0m38.174s
00:14:43.648 sys 0m5.832s
00:14:43.648 06:43:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:43.648 06:43:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:43.648 ************************************
00:14:43.648 END TEST nvmf_connect_stress
00:14:43.648 ************************************
00:14:43.648 06:43:31 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:43.648 06:43:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:43.648 06:43:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:43.648 06:43:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.648 ************************************
00:14:43.648 START TEST nvmf_fused_ordering
00:14:43.648 ************************************
00:14:43.648 06:43:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:43.906 * Looking for test storage...
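The fused_ordering run now repeats the same nvmftestinit sequence, beginning with the PCI scan that fills the next stretch of the trace. Reduced to its core, the device discovery does something like this (a hedged sketch; the harness drives it from a cached PCI bus scan rather than this literal loop):

    e810=(0000:0a:00.0 0000:0a:00.1)   # Intel E810 functions (0x8086:0x159b) on this host
    for pci in "${e810[@]}"; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do   # map PCI function -> netdev
            net_devs+=("${dev##*/}")   # yields cvl_0_0 and cvl_0_1 here
        done
    done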
00:14:43.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.906 06:43:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.907 06:43:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.808 06:43:33 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
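The ip/iptables sequence traced just above is the core of the harness's TCP fixture: one kernel port of the NIC pair is moved into a private network namespace to act as the NVMe-oF target, the sibling port stays in the root namespace as the initiator, and the pings that follow verify the path in both directions. Condensed into a standalone sketch (interface names, addresses, and port 4420 are taken from this run; this is an illustration of the traced commands, not the harness itself):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                           # root ns -> namespaced target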
00:14:45.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:14:45.808 00:14:45.808 --- 10.0.0.2 ping statistics --- 00:14:45.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.808 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:14:45.808 00:14:45.808 --- 10.0.0.1 ping statistics --- 00:14:45.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.808 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.808 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=593244 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 593244 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 593244 ']' 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.065 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 [2024-07-15 06:43:33.483261] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
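From here the target application is launched inside that namespace and configured through its RPC socket; the rpc_cmd traces that follow perform exactly these steps via the harness's wrapper. An equivalent hand-run sequence, sketched with SPDK's stock scripts/rpc.py and an assumed $SPDK checkout path (the rpc helper and the readiness loop are stand-ins for the harness's waitforlisten; only the RPC names and arguments are copied from the trace):

  SPDK=/path/to/spdk                                    # assumed checkout location
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten stand-in
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }                # rpc.py defaults to /var/tmp/spdk.sock
  rpc nvmf_create_transport -t tcp -o -u 8192           # flags as captured in the trace
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                   # 1000 MB null bdev, 512 B blocks
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  "$SPDK/test/nvme/fused_ordering/fused_ordering" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering tool attaches to cnode1 over that listener and, as it exercises each case, emits the long numbered fused_ordering(N) enumeration seen further down in the trace.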
00:14:46.065 [2024-07-15 06:43:33.483359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.065 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.065 [2024-07-15 06:43:33.550788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.065 [2024-07-15 06:43:33.637627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.065 [2024-07-15 06:43:33.637686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.065 [2024-07-15 06:43:33.637712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.065 [2024-07-15 06:43:33.637726] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.065 [2024-07-15 06:43:33.637739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.065 [2024-07-15 06:43:33.637770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 [2024-07-15 06:43:33.786261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 [2024-07-15 06:43:33.802489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 NULL1 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.322 06:43:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:46.322 [2024-07-15 06:43:33.847067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:46.322 [2024-07-15 06:43:33.847109] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593386 ] 00:14:46.322 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.885 Attached to nqn.2016-06.io.spdk:cnode1 00:14:46.885 Namespace ID: 1 size: 1GB 00:14:46.885 fused_ordering(0) 00:14:46.885 fused_ordering(1) 00:14:46.885 fused_ordering(2) 00:14:46.885 fused_ordering(3) 00:14:46.885 fused_ordering(4) 00:14:46.885 fused_ordering(5) 00:14:46.885 fused_ordering(6) 00:14:46.885 fused_ordering(7) 00:14:46.885 fused_ordering(8) 00:14:46.885 fused_ordering(9) 00:14:46.885 fused_ordering(10) 00:14:46.885 fused_ordering(11) 00:14:46.885 fused_ordering(12) 00:14:46.885 fused_ordering(13) 00:14:46.885 fused_ordering(14) 00:14:46.885 fused_ordering(15) 00:14:46.885 fused_ordering(16) 00:14:46.885 fused_ordering(17) 00:14:46.885 fused_ordering(18) 00:14:46.885 fused_ordering(19) 00:14:46.885 fused_ordering(20) 00:14:46.885 fused_ordering(21) 00:14:46.885 fused_ordering(22) 00:14:46.885 fused_ordering(23) 00:14:46.885 fused_ordering(24) 00:14:46.885 fused_ordering(25) 00:14:46.885 fused_ordering(26) 00:14:46.885 fused_ordering(27) 00:14:46.885 fused_ordering(28) 00:14:46.885 fused_ordering(29) 00:14:46.885 fused_ordering(30) 00:14:46.885 fused_ordering(31) 00:14:46.885 fused_ordering(32) 00:14:46.885 fused_ordering(33) 00:14:46.885 fused_ordering(34) 00:14:46.885 fused_ordering(35) 00:14:46.885 fused_ordering(36) 00:14:46.885 fused_ordering(37) 00:14:46.885 fused_ordering(38) 00:14:46.885 fused_ordering(39) 00:14:46.885 fused_ordering(40) 00:14:46.885 fused_ordering(41) 00:14:46.885 fused_ordering(42) 00:14:46.885 fused_ordering(43) 00:14:46.885 fused_ordering(44) 00:14:46.885 fused_ordering(45) 
00:14:46.885 fused_ordering(46) 00:14:46.885 fused_ordering(47) 00:14:46.885 fused_ordering(48) 00:14:46.885 fused_ordering(49) 00:14:46.885 fused_ordering(50) 00:14:46.885 fused_ordering(51) 00:14:46.885 fused_ordering(52) 00:14:46.885 fused_ordering(53) 00:14:46.885 fused_ordering(54) 00:14:46.885 fused_ordering(55) 00:14:46.885 fused_ordering(56) 00:14:46.885 fused_ordering(57) 00:14:46.885 fused_ordering(58) 00:14:46.885 fused_ordering(59) 00:14:46.885 fused_ordering(60) 00:14:46.885 fused_ordering(61) 00:14:46.885 fused_ordering(62) 00:14:46.885 fused_ordering(63) 00:14:46.885 fused_ordering(64) 00:14:46.885 fused_ordering(65) 00:14:46.885 fused_ordering(66) 00:14:46.885 fused_ordering(67) 00:14:46.885 fused_ordering(68) 00:14:46.885 fused_ordering(69) 00:14:46.885 fused_ordering(70) 00:14:46.885 fused_ordering(71) 00:14:46.885 fused_ordering(72) 00:14:46.885 fused_ordering(73) 00:14:46.885 fused_ordering(74) 00:14:46.885 fused_ordering(75) 00:14:46.885 fused_ordering(76) 00:14:46.885 fused_ordering(77) 00:14:46.885 fused_ordering(78) 00:14:46.885 fused_ordering(79) 00:14:46.885 fused_ordering(80) 00:14:46.885 fused_ordering(81) 00:14:46.885 fused_ordering(82) 00:14:46.885 fused_ordering(83) 00:14:46.885 fused_ordering(84) 00:14:46.885 fused_ordering(85) 00:14:46.885 fused_ordering(86) 00:14:46.885 fused_ordering(87) 00:14:46.885 fused_ordering(88) 00:14:46.885 fused_ordering(89) 00:14:46.885 fused_ordering(90) 00:14:46.885 fused_ordering(91) 00:14:46.885 fused_ordering(92) 00:14:46.885 fused_ordering(93) 00:14:46.885 fused_ordering(94) 00:14:46.885 fused_ordering(95) 00:14:46.885 fused_ordering(96) 00:14:46.885 fused_ordering(97) 00:14:46.885 fused_ordering(98) 00:14:46.885 fused_ordering(99) 00:14:46.885 fused_ordering(100) 00:14:46.885 fused_ordering(101) 00:14:46.885 fused_ordering(102) 00:14:46.885 fused_ordering(103) 00:14:46.885 fused_ordering(104) 00:14:46.885 fused_ordering(105) 00:14:46.885 fused_ordering(106) 00:14:46.885 fused_ordering(107) 00:14:46.885 fused_ordering(108) 00:14:46.885 fused_ordering(109) 00:14:46.885 fused_ordering(110) 00:14:46.886 fused_ordering(111) 00:14:46.886 fused_ordering(112) 00:14:46.886 fused_ordering(113) 00:14:46.886 fused_ordering(114) 00:14:46.886 fused_ordering(115) 00:14:46.886 fused_ordering(116) 00:14:46.886 fused_ordering(117) 00:14:46.886 fused_ordering(118) 00:14:46.886 fused_ordering(119) 00:14:46.886 fused_ordering(120) 00:14:46.886 fused_ordering(121) 00:14:46.886 fused_ordering(122) 00:14:46.886 fused_ordering(123) 00:14:46.886 fused_ordering(124) 00:14:46.886 fused_ordering(125) 00:14:46.886 fused_ordering(126) 00:14:46.886 fused_ordering(127) 00:14:46.886 fused_ordering(128) 00:14:46.886 fused_ordering(129) 00:14:46.886 fused_ordering(130) 00:14:46.886 fused_ordering(131) 00:14:46.886 fused_ordering(132) 00:14:46.886 fused_ordering(133) 00:14:46.886 fused_ordering(134) 00:14:46.886 fused_ordering(135) 00:14:46.886 fused_ordering(136) 00:14:46.886 fused_ordering(137) 00:14:46.886 fused_ordering(138) 00:14:46.886 fused_ordering(139) 00:14:46.886 fused_ordering(140) 00:14:46.886 fused_ordering(141) 00:14:46.886 fused_ordering(142) 00:14:46.886 fused_ordering(143) 00:14:46.886 fused_ordering(144) 00:14:46.886 fused_ordering(145) 00:14:46.886 fused_ordering(146) 00:14:46.886 fused_ordering(147) 00:14:46.886 fused_ordering(148) 00:14:46.886 fused_ordering(149) 00:14:46.886 fused_ordering(150) 00:14:46.886 fused_ordering(151) 00:14:46.886 fused_ordering(152) 00:14:46.886 fused_ordering(153) 00:14:46.886 fused_ordering(154) 
00:14:46.886 fused_ordering(155) 00:14:46.886 fused_ordering(156) 00:14:46.886 fused_ordering(157) 00:14:46.886 fused_ordering(158) 00:14:46.886 fused_ordering(159) 00:14:46.886 fused_ordering(160) 00:14:46.886 fused_ordering(161) 00:14:46.886 fused_ordering(162) 00:14:46.886 fused_ordering(163) 00:14:46.886 fused_ordering(164) 00:14:46.886 fused_ordering(165) 00:14:46.886 fused_ordering(166) 00:14:46.886 fused_ordering(167) 00:14:46.886 fused_ordering(168) 00:14:46.886 fused_ordering(169) 00:14:46.886 fused_ordering(170) 00:14:46.886 fused_ordering(171) 00:14:46.886 fused_ordering(172) 00:14:46.886 fused_ordering(173) 00:14:46.886 fused_ordering(174) 00:14:46.886 fused_ordering(175) 00:14:46.886 fused_ordering(176) 00:14:46.886 fused_ordering(177) 00:14:46.886 fused_ordering(178) 00:14:46.886 fused_ordering(179) 00:14:46.886 fused_ordering(180) 00:14:46.886 fused_ordering(181) 00:14:46.886 fused_ordering(182) 00:14:46.886 fused_ordering(183) 00:14:46.886 fused_ordering(184) 00:14:46.886 fused_ordering(185) 00:14:46.886 fused_ordering(186) 00:14:46.886 fused_ordering(187) 00:14:46.886 fused_ordering(188) 00:14:46.886 fused_ordering(189) 00:14:46.886 fused_ordering(190) 00:14:46.886 fused_ordering(191) 00:14:46.886 fused_ordering(192) 00:14:46.886 fused_ordering(193) 00:14:46.886 fused_ordering(194) 00:14:46.886 fused_ordering(195) 00:14:46.886 fused_ordering(196) 00:14:46.886 fused_ordering(197) 00:14:46.886 fused_ordering(198) 00:14:46.886 fused_ordering(199) 00:14:46.886 fused_ordering(200) 00:14:46.886 fused_ordering(201) 00:14:46.886 fused_ordering(202) 00:14:46.886 fused_ordering(203) 00:14:46.886 fused_ordering(204) 00:14:46.886 fused_ordering(205) 00:14:47.448 fused_ordering(206) 00:14:47.448 fused_ordering(207) 00:14:47.448 fused_ordering(208) 00:14:47.448 fused_ordering(209) 00:14:47.448 fused_ordering(210) 00:14:47.448 fused_ordering(211) 00:14:47.448 fused_ordering(212) 00:14:47.448 fused_ordering(213) 00:14:47.448 fused_ordering(214) 00:14:47.448 fused_ordering(215) 00:14:47.448 fused_ordering(216) 00:14:47.448 fused_ordering(217) 00:14:47.448 fused_ordering(218) 00:14:47.448 fused_ordering(219) 00:14:47.448 fused_ordering(220) 00:14:47.448 fused_ordering(221) 00:14:47.448 fused_ordering(222) 00:14:47.448 fused_ordering(223) 00:14:47.448 fused_ordering(224) 00:14:47.448 fused_ordering(225) 00:14:47.448 fused_ordering(226) 00:14:47.448 fused_ordering(227) 00:14:47.448 fused_ordering(228) 00:14:47.448 fused_ordering(229) 00:14:47.448 fused_ordering(230) 00:14:47.448 fused_ordering(231) 00:14:47.448 fused_ordering(232) 00:14:47.448 fused_ordering(233) 00:14:47.448 fused_ordering(234) 00:14:47.448 fused_ordering(235) 00:14:47.448 fused_ordering(236) 00:14:47.448 fused_ordering(237) 00:14:47.448 fused_ordering(238) 00:14:47.448 fused_ordering(239) 00:14:47.448 fused_ordering(240) 00:14:47.448 fused_ordering(241) 00:14:47.448 fused_ordering(242) 00:14:47.448 fused_ordering(243) 00:14:47.448 fused_ordering(244) 00:14:47.448 fused_ordering(245) 00:14:47.448 fused_ordering(246) 00:14:47.448 fused_ordering(247) 00:14:47.448 fused_ordering(248) 00:14:47.448 fused_ordering(249) 00:14:47.448 fused_ordering(250) 00:14:47.448 fused_ordering(251) 00:14:47.448 fused_ordering(252) 00:14:47.448 fused_ordering(253) 00:14:47.448 fused_ordering(254) 00:14:47.448 fused_ordering(255) 00:14:47.448 fused_ordering(256) 00:14:47.448 fused_ordering(257) 00:14:47.448 fused_ordering(258) 00:14:47.448 fused_ordering(259) 00:14:47.448 fused_ordering(260) 00:14:47.448 fused_ordering(261) 00:14:47.448 
fused_ordering(262) 00:14:47.448 fused_ordering(263) 00:14:47.448 fused_ordering(264) 00:14:47.448 fused_ordering(265) 00:14:47.448 fused_ordering(266) 00:14:47.448 fused_ordering(267) 00:14:47.448 fused_ordering(268) 00:14:47.448 fused_ordering(269) 00:14:47.448 fused_ordering(270) 00:14:47.448 fused_ordering(271) 00:14:47.448 fused_ordering(272) 00:14:47.448 fused_ordering(273) 00:14:47.448 fused_ordering(274) 00:14:47.448 fused_ordering(275) 00:14:47.448 fused_ordering(276) 00:14:47.448 fused_ordering(277) 00:14:47.448 fused_ordering(278) 00:14:47.448 fused_ordering(279) 00:14:47.448 fused_ordering(280) 00:14:47.448 fused_ordering(281) 00:14:47.448 fused_ordering(282) 00:14:47.448 fused_ordering(283) 00:14:47.448 fused_ordering(284) 00:14:47.448 fused_ordering(285) 00:14:47.448 fused_ordering(286) 00:14:47.448 fused_ordering(287) 00:14:47.448 fused_ordering(288) 00:14:47.448 fused_ordering(289) 00:14:47.448 fused_ordering(290) 00:14:47.448 fused_ordering(291) 00:14:47.448 fused_ordering(292) 00:14:47.448 fused_ordering(293) 00:14:47.448 fused_ordering(294) 00:14:47.448 fused_ordering(295) 00:14:47.449 fused_ordering(296) 00:14:47.449 fused_ordering(297) 00:14:47.449 fused_ordering(298) 00:14:47.449 fused_ordering(299) 00:14:47.449 fused_ordering(300) 00:14:47.449 fused_ordering(301) 00:14:47.449 fused_ordering(302) 00:14:47.449 fused_ordering(303) 00:14:47.449 fused_ordering(304) 00:14:47.449 fused_ordering(305) 00:14:47.449 fused_ordering(306) 00:14:47.449 fused_ordering(307) 00:14:47.449 fused_ordering(308) 00:14:47.449 fused_ordering(309) 00:14:47.449 fused_ordering(310) 00:14:47.449 fused_ordering(311) 00:14:47.449 fused_ordering(312) 00:14:47.449 fused_ordering(313) 00:14:47.449 fused_ordering(314) 00:14:47.449 fused_ordering(315) 00:14:47.449 fused_ordering(316) 00:14:47.449 fused_ordering(317) 00:14:47.449 fused_ordering(318) 00:14:47.449 fused_ordering(319) 00:14:47.449 fused_ordering(320) 00:14:47.449 fused_ordering(321) 00:14:47.449 fused_ordering(322) 00:14:47.449 fused_ordering(323) 00:14:47.449 fused_ordering(324) 00:14:47.449 fused_ordering(325) 00:14:47.449 fused_ordering(326) 00:14:47.449 fused_ordering(327) 00:14:47.449 fused_ordering(328) 00:14:47.449 fused_ordering(329) 00:14:47.449 fused_ordering(330) 00:14:47.449 fused_ordering(331) 00:14:47.449 fused_ordering(332) 00:14:47.449 fused_ordering(333) 00:14:47.449 fused_ordering(334) 00:14:47.449 fused_ordering(335) 00:14:47.449 fused_ordering(336) 00:14:47.449 fused_ordering(337) 00:14:47.449 fused_ordering(338) 00:14:47.449 fused_ordering(339) 00:14:47.449 fused_ordering(340) 00:14:47.449 fused_ordering(341) 00:14:47.449 fused_ordering(342) 00:14:47.449 fused_ordering(343) 00:14:47.449 fused_ordering(344) 00:14:47.449 fused_ordering(345) 00:14:47.449 fused_ordering(346) 00:14:47.449 fused_ordering(347) 00:14:47.449 fused_ordering(348) 00:14:47.449 fused_ordering(349) 00:14:47.449 fused_ordering(350) 00:14:47.449 fused_ordering(351) 00:14:47.449 fused_ordering(352) 00:14:47.449 fused_ordering(353) 00:14:47.449 fused_ordering(354) 00:14:47.449 fused_ordering(355) 00:14:47.449 fused_ordering(356) 00:14:47.449 fused_ordering(357) 00:14:47.449 fused_ordering(358) 00:14:47.449 fused_ordering(359) 00:14:47.449 fused_ordering(360) 00:14:47.449 fused_ordering(361) 00:14:47.449 fused_ordering(362) 00:14:47.449 fused_ordering(363) 00:14:47.449 fused_ordering(364) 00:14:47.449 fused_ordering(365) 00:14:47.449 fused_ordering(366) 00:14:47.449 fused_ordering(367) 00:14:47.449 fused_ordering(368) 00:14:47.449 fused_ordering(369) 
00:14:47.449 fused_ordering(370) 00:14:47.449 fused_ordering(371) 00:14:47.449 fused_ordering(372) 00:14:47.449 fused_ordering(373) 00:14:47.449 fused_ordering(374) 00:14:47.449 fused_ordering(375) 00:14:47.449 fused_ordering(376) 00:14:47.449 fused_ordering(377) 00:14:47.449 fused_ordering(378) 00:14:47.449 fused_ordering(379) 00:14:47.449 fused_ordering(380) 00:14:47.449 fused_ordering(381) 00:14:47.449 fused_ordering(382) 00:14:47.449 fused_ordering(383) 00:14:47.449 fused_ordering(384) 00:14:47.449 fused_ordering(385) 00:14:47.449 fused_ordering(386) 00:14:47.449 fused_ordering(387) 00:14:47.449 fused_ordering(388) 00:14:47.449 fused_ordering(389) 00:14:47.449 fused_ordering(390) 00:14:47.449 fused_ordering(391) 00:14:47.449 fused_ordering(392) 00:14:47.449 fused_ordering(393) 00:14:47.449 fused_ordering(394) 00:14:47.449 fused_ordering(395) 00:14:47.449 fused_ordering(396) 00:14:47.449 fused_ordering(397) 00:14:47.449 fused_ordering(398) 00:14:47.449 fused_ordering(399) 00:14:47.449 fused_ordering(400) 00:14:47.449 fused_ordering(401) 00:14:47.449 fused_ordering(402) 00:14:47.449 fused_ordering(403) 00:14:47.449 fused_ordering(404) 00:14:47.449 fused_ordering(405) 00:14:47.449 fused_ordering(406) 00:14:47.449 fused_ordering(407) 00:14:47.449 fused_ordering(408) 00:14:47.449 fused_ordering(409) 00:14:47.449 fused_ordering(410) 00:14:48.012 fused_ordering(411) 00:14:48.012 fused_ordering(412) 00:14:48.012 fused_ordering(413) 00:14:48.012 fused_ordering(414) 00:14:48.012 fused_ordering(415) 00:14:48.012 fused_ordering(416) 00:14:48.012 fused_ordering(417) 00:14:48.012 fused_ordering(418) 00:14:48.012 fused_ordering(419) 00:14:48.012 fused_ordering(420) 00:14:48.012 fused_ordering(421) 00:14:48.012 fused_ordering(422) 00:14:48.012 fused_ordering(423) 00:14:48.012 fused_ordering(424) 00:14:48.012 fused_ordering(425) 00:14:48.012 fused_ordering(426) 00:14:48.012 fused_ordering(427) 00:14:48.012 fused_ordering(428) 00:14:48.012 fused_ordering(429) 00:14:48.012 fused_ordering(430) 00:14:48.012 fused_ordering(431) 00:14:48.012 fused_ordering(432) 00:14:48.012 fused_ordering(433) 00:14:48.012 fused_ordering(434) 00:14:48.012 fused_ordering(435) 00:14:48.012 fused_ordering(436) 00:14:48.012 fused_ordering(437) 00:14:48.012 fused_ordering(438) 00:14:48.012 fused_ordering(439) 00:14:48.012 fused_ordering(440) 00:14:48.012 fused_ordering(441) 00:14:48.012 fused_ordering(442) 00:14:48.012 fused_ordering(443) 00:14:48.012 fused_ordering(444) 00:14:48.012 fused_ordering(445) 00:14:48.012 fused_ordering(446) 00:14:48.012 fused_ordering(447) 00:14:48.012 fused_ordering(448) 00:14:48.012 fused_ordering(449) 00:14:48.012 fused_ordering(450) 00:14:48.012 fused_ordering(451) 00:14:48.012 fused_ordering(452) 00:14:48.012 fused_ordering(453) 00:14:48.012 fused_ordering(454) 00:14:48.012 fused_ordering(455) 00:14:48.012 fused_ordering(456) 00:14:48.012 fused_ordering(457) 00:14:48.012 fused_ordering(458) 00:14:48.012 fused_ordering(459) 00:14:48.012 fused_ordering(460) 00:14:48.012 fused_ordering(461) 00:14:48.012 fused_ordering(462) 00:14:48.012 fused_ordering(463) 00:14:48.012 fused_ordering(464) 00:14:48.012 fused_ordering(465) 00:14:48.012 fused_ordering(466) 00:14:48.012 fused_ordering(467) 00:14:48.012 fused_ordering(468) 00:14:48.012 fused_ordering(469) 00:14:48.012 fused_ordering(470) 00:14:48.012 fused_ordering(471) 00:14:48.012 fused_ordering(472) 00:14:48.012 fused_ordering(473) 00:14:48.012 fused_ordering(474) 00:14:48.012 fused_ordering(475) 00:14:48.012 fused_ordering(476) 00:14:48.012 
fused_ordering(477) 00:14:48.012 fused_ordering(478) 00:14:48.012 fused_ordering(479) 00:14:48.012 fused_ordering(480) 00:14:48.012 fused_ordering(481) 00:14:48.012 fused_ordering(482) 00:14:48.012 fused_ordering(483) 00:14:48.012 fused_ordering(484) 00:14:48.012 fused_ordering(485) 00:14:48.012 fused_ordering(486) 00:14:48.012 fused_ordering(487) 00:14:48.012 fused_ordering(488) 00:14:48.012 fused_ordering(489) 00:14:48.012 fused_ordering(490) 00:14:48.012 fused_ordering(491) 00:14:48.012 fused_ordering(492) 00:14:48.012 fused_ordering(493) 00:14:48.012 fused_ordering(494) 00:14:48.012 fused_ordering(495) 00:14:48.012 fused_ordering(496) 00:14:48.012 fused_ordering(497) 00:14:48.012 fused_ordering(498) 00:14:48.012 fused_ordering(499) 00:14:48.012 fused_ordering(500) 00:14:48.012 fused_ordering(501) 00:14:48.012 fused_ordering(502) 00:14:48.013 fused_ordering(503) 00:14:48.013 fused_ordering(504) 00:14:48.013 fused_ordering(505) 00:14:48.013 fused_ordering(506) 00:14:48.013 fused_ordering(507) 00:14:48.013 fused_ordering(508) 00:14:48.013 fused_ordering(509) 00:14:48.013 fused_ordering(510) 00:14:48.013 fused_ordering(511) 00:14:48.013 fused_ordering(512) 00:14:48.013 fused_ordering(513) 00:14:48.013 fused_ordering(514) 00:14:48.013 fused_ordering(515) 00:14:48.013 fused_ordering(516) 00:14:48.013 fused_ordering(517) 00:14:48.013 fused_ordering(518) 00:14:48.013 fused_ordering(519) 00:14:48.013 fused_ordering(520) 00:14:48.013 fused_ordering(521) 00:14:48.013 fused_ordering(522) 00:14:48.013 fused_ordering(523) 00:14:48.013 fused_ordering(524) 00:14:48.013 fused_ordering(525) 00:14:48.013 fused_ordering(526) 00:14:48.013 fused_ordering(527) 00:14:48.013 fused_ordering(528) 00:14:48.013 fused_ordering(529) 00:14:48.013 fused_ordering(530) 00:14:48.013 fused_ordering(531) 00:14:48.013 fused_ordering(532) 00:14:48.013 fused_ordering(533) 00:14:48.013 fused_ordering(534) 00:14:48.013 fused_ordering(535) 00:14:48.013 fused_ordering(536) 00:14:48.013 fused_ordering(537) 00:14:48.013 fused_ordering(538) 00:14:48.013 fused_ordering(539) 00:14:48.013 fused_ordering(540) 00:14:48.013 fused_ordering(541) 00:14:48.013 fused_ordering(542) 00:14:48.013 fused_ordering(543) 00:14:48.013 fused_ordering(544) 00:14:48.013 fused_ordering(545) 00:14:48.013 fused_ordering(546) 00:14:48.013 fused_ordering(547) 00:14:48.013 fused_ordering(548) 00:14:48.013 fused_ordering(549) 00:14:48.013 fused_ordering(550) 00:14:48.013 fused_ordering(551) 00:14:48.013 fused_ordering(552) 00:14:48.013 fused_ordering(553) 00:14:48.013 fused_ordering(554) 00:14:48.013 fused_ordering(555) 00:14:48.013 fused_ordering(556) 00:14:48.013 fused_ordering(557) 00:14:48.013 fused_ordering(558) 00:14:48.013 fused_ordering(559) 00:14:48.013 fused_ordering(560) 00:14:48.013 fused_ordering(561) 00:14:48.013 fused_ordering(562) 00:14:48.013 fused_ordering(563) 00:14:48.013 fused_ordering(564) 00:14:48.013 fused_ordering(565) 00:14:48.013 fused_ordering(566) 00:14:48.013 fused_ordering(567) 00:14:48.013 fused_ordering(568) 00:14:48.013 fused_ordering(569) 00:14:48.013 fused_ordering(570) 00:14:48.013 fused_ordering(571) 00:14:48.013 fused_ordering(572) 00:14:48.013 fused_ordering(573) 00:14:48.013 fused_ordering(574) 00:14:48.013 fused_ordering(575) 00:14:48.013 fused_ordering(576) 00:14:48.013 fused_ordering(577) 00:14:48.013 fused_ordering(578) 00:14:48.013 fused_ordering(579) 00:14:48.013 fused_ordering(580) 00:14:48.013 fused_ordering(581) 00:14:48.013 fused_ordering(582) 00:14:48.013 fused_ordering(583) 00:14:48.013 fused_ordering(584) 
00:14:48.013 fused_ordering(585) 00:14:48.013 fused_ordering(586) 00:14:48.013 fused_ordering(587) 00:14:48.013 fused_ordering(588) 00:14:48.013 fused_ordering(589) 00:14:48.013 fused_ordering(590) 00:14:48.013 fused_ordering(591) 00:14:48.013 fused_ordering(592) 00:14:48.013 fused_ordering(593) 00:14:48.013 fused_ordering(594) 00:14:48.013 fused_ordering(595) 00:14:48.013 fused_ordering(596) 00:14:48.013 fused_ordering(597) 00:14:48.013 fused_ordering(598) 00:14:48.013 fused_ordering(599) 00:14:48.013 fused_ordering(600) 00:14:48.013 fused_ordering(601) 00:14:48.013 fused_ordering(602) 00:14:48.013 fused_ordering(603) 00:14:48.013 fused_ordering(604) 00:14:48.013 fused_ordering(605) 00:14:48.013 fused_ordering(606) 00:14:48.013 fused_ordering(607) 00:14:48.013 fused_ordering(608) 00:14:48.013 fused_ordering(609) 00:14:48.013 fused_ordering(610) 00:14:48.013 fused_ordering(611) 00:14:48.013 fused_ordering(612) 00:14:48.013 fused_ordering(613) 00:14:48.013 fused_ordering(614) 00:14:48.013 fused_ordering(615) 00:14:48.578 fused_ordering(616) 00:14:48.578 fused_ordering(617) 00:14:48.578 fused_ordering(618) 00:14:48.578 fused_ordering(619) 00:14:48.578 fused_ordering(620) 00:14:48.578 fused_ordering(621) 00:14:48.578 fused_ordering(622) 00:14:48.578 fused_ordering(623) 00:14:48.578 fused_ordering(624) 00:14:48.578 fused_ordering(625) 00:14:48.578 fused_ordering(626) 00:14:48.578 fused_ordering(627) 00:14:48.578 fused_ordering(628) 00:14:48.578 fused_ordering(629) 00:14:48.578 fused_ordering(630) 00:14:48.578 fused_ordering(631) 00:14:48.578 fused_ordering(632) 00:14:48.578 fused_ordering(633) 00:14:48.578 fused_ordering(634) 00:14:48.578 fused_ordering(635) 00:14:48.578 fused_ordering(636) 00:14:48.578 fused_ordering(637) 00:14:48.578 fused_ordering(638) 00:14:48.578 fused_ordering(639) 00:14:48.578 fused_ordering(640) 00:14:48.578 fused_ordering(641) 00:14:48.578 fused_ordering(642) 00:14:48.578 fused_ordering(643) 00:14:48.578 fused_ordering(644) 00:14:48.578 fused_ordering(645) 00:14:48.579 fused_ordering(646) 00:14:48.579 fused_ordering(647) 00:14:48.579 fused_ordering(648) 00:14:48.579 fused_ordering(649) 00:14:48.579 fused_ordering(650) 00:14:48.579 fused_ordering(651) 00:14:48.579 fused_ordering(652) 00:14:48.579 fused_ordering(653) 00:14:48.579 fused_ordering(654) 00:14:48.579 fused_ordering(655) 00:14:48.579 fused_ordering(656) 00:14:48.579 fused_ordering(657) 00:14:48.579 fused_ordering(658) 00:14:48.579 fused_ordering(659) 00:14:48.579 fused_ordering(660) 00:14:48.579 fused_ordering(661) 00:14:48.579 fused_ordering(662) 00:14:48.579 fused_ordering(663) 00:14:48.579 fused_ordering(664) 00:14:48.579 fused_ordering(665) 00:14:48.579 fused_ordering(666) 00:14:48.579 fused_ordering(667) 00:14:48.579 fused_ordering(668) 00:14:48.579 fused_ordering(669) 00:14:48.579 fused_ordering(670) 00:14:48.579 fused_ordering(671) 00:14:48.579 fused_ordering(672) 00:14:48.579 fused_ordering(673) 00:14:48.579 fused_ordering(674) 00:14:48.579 fused_ordering(675) 00:14:48.579 fused_ordering(676) 00:14:48.579 fused_ordering(677) 00:14:48.579 fused_ordering(678) 00:14:48.579 fused_ordering(679) 00:14:48.579 fused_ordering(680) 00:14:48.579 fused_ordering(681) 00:14:48.579 fused_ordering(682) 00:14:48.579 fused_ordering(683) 00:14:48.579 fused_ordering(684) 00:14:48.579 fused_ordering(685) 00:14:48.579 fused_ordering(686) 00:14:48.579 fused_ordering(687) 00:14:48.579 fused_ordering(688) 00:14:48.579 fused_ordering(689) 00:14:48.579 fused_ordering(690) 00:14:48.579 fused_ordering(691) 00:14:48.579 
fused_ordering(692) 00:14:48.579 fused_ordering(693) 00:14:48.579 fused_ordering(694) 00:14:48.579 fused_ordering(695) 00:14:48.579 fused_ordering(696) 00:14:48.579 fused_ordering(697) 00:14:48.579 fused_ordering(698) 00:14:48.579 fused_ordering(699) 00:14:48.579 fused_ordering(700) 00:14:48.579 fused_ordering(701) 00:14:48.579 fused_ordering(702) 00:14:48.579 fused_ordering(703) 00:14:48.579 fused_ordering(704) 00:14:48.579 fused_ordering(705) 00:14:48.579 fused_ordering(706) 00:14:48.579 fused_ordering(707) 00:14:48.579 fused_ordering(708) 00:14:48.579 fused_ordering(709) 00:14:48.579 fused_ordering(710) 00:14:48.579 fused_ordering(711) 00:14:48.579 fused_ordering(712) 00:14:48.579 fused_ordering(713) 00:14:48.579 fused_ordering(714) 00:14:48.579 fused_ordering(715) 00:14:48.579 fused_ordering(716) 00:14:48.579 fused_ordering(717) 00:14:48.579 fused_ordering(718) 00:14:48.579 fused_ordering(719) 00:14:48.579 fused_ordering(720) 00:14:48.579 fused_ordering(721) 00:14:48.579 fused_ordering(722) 00:14:48.579 fused_ordering(723) 00:14:48.579 fused_ordering(724) 00:14:48.579 fused_ordering(725) 00:14:48.579 fused_ordering(726) 00:14:48.579 fused_ordering(727) 00:14:48.579 fused_ordering(728) 00:14:48.579 fused_ordering(729) 00:14:48.579 fused_ordering(730) 00:14:48.579 fused_ordering(731) 00:14:48.579 fused_ordering(732) 00:14:48.579 fused_ordering(733) 00:14:48.579 fused_ordering(734) 00:14:48.579 fused_ordering(735) 00:14:48.579 fused_ordering(736) 00:14:48.579 fused_ordering(737) 00:14:48.579 fused_ordering(738) 00:14:48.579 fused_ordering(739) 00:14:48.579 fused_ordering(740) 00:14:48.579 fused_ordering(741) 00:14:48.579 fused_ordering(742) 00:14:48.579 fused_ordering(743) 00:14:48.579 fused_ordering(744) 00:14:48.579 fused_ordering(745) 00:14:48.579 fused_ordering(746) 00:14:48.579 fused_ordering(747) 00:14:48.579 fused_ordering(748) 00:14:48.579 fused_ordering(749) 00:14:48.579 fused_ordering(750) 00:14:48.579 fused_ordering(751) 00:14:48.579 fused_ordering(752) 00:14:48.579 fused_ordering(753) 00:14:48.579 fused_ordering(754) 00:14:48.579 fused_ordering(755) 00:14:48.579 fused_ordering(756) 00:14:48.579 fused_ordering(757) 00:14:48.579 fused_ordering(758) 00:14:48.579 fused_ordering(759) 00:14:48.579 fused_ordering(760) 00:14:48.579 fused_ordering(761) 00:14:48.579 fused_ordering(762) 00:14:48.579 fused_ordering(763) 00:14:48.579 fused_ordering(764) 00:14:48.579 fused_ordering(765) 00:14:48.579 fused_ordering(766) 00:14:48.579 fused_ordering(767) 00:14:48.579 fused_ordering(768) 00:14:48.579 fused_ordering(769) 00:14:48.579 fused_ordering(770) 00:14:48.579 fused_ordering(771) 00:14:48.579 fused_ordering(772) 00:14:48.579 fused_ordering(773) 00:14:48.579 fused_ordering(774) 00:14:48.579 fused_ordering(775) 00:14:48.579 fused_ordering(776) 00:14:48.579 fused_ordering(777) 00:14:48.579 fused_ordering(778) 00:14:48.579 fused_ordering(779) 00:14:48.579 fused_ordering(780) 00:14:48.579 fused_ordering(781) 00:14:48.579 fused_ordering(782) 00:14:48.579 fused_ordering(783) 00:14:48.579 fused_ordering(784) 00:14:48.579 fused_ordering(785) 00:14:48.579 fused_ordering(786) 00:14:48.579 fused_ordering(787) 00:14:48.579 fused_ordering(788) 00:14:48.579 fused_ordering(789) 00:14:48.579 fused_ordering(790) 00:14:48.579 fused_ordering(791) 00:14:48.579 fused_ordering(792) 00:14:48.579 fused_ordering(793) 00:14:48.579 fused_ordering(794) 00:14:48.579 fused_ordering(795) 00:14:48.579 fused_ordering(796) 00:14:48.579 fused_ordering(797) 00:14:48.579 fused_ordering(798) 00:14:48.579 fused_ordering(799) 
00:14:48.579 fused_ordering(800) 00:14:48.579 fused_ordering(801) 00:14:48.579 fused_ordering(802) 00:14:48.579 fused_ordering(803) 00:14:48.579 fused_ordering(804) 00:14:48.579 fused_ordering(805) 00:14:48.579 fused_ordering(806) 00:14:48.579 fused_ordering(807) 00:14:48.579 fused_ordering(808) 00:14:48.579 fused_ordering(809) 00:14:48.579 fused_ordering(810) 00:14:48.579 fused_ordering(811) 00:14:48.579 fused_ordering(812) 00:14:48.579 fused_ordering(813) 00:14:48.579 fused_ordering(814) 00:14:48.579 fused_ordering(815) 00:14:48.579 fused_ordering(816) 00:14:48.579 fused_ordering(817) 00:14:48.579 fused_ordering(818) 00:14:48.579 fused_ordering(819) 00:14:48.579 fused_ordering(820) 00:14:49.513 fused_ordering(821) 00:14:49.513 fused_ordering(822) 00:14:49.513 fused_ordering(823) 00:14:49.513 fused_ordering(824) 00:14:49.513 fused_ordering(825) 00:14:49.513 fused_ordering(826) 00:14:49.513 fused_ordering(827) 00:14:49.513 fused_ordering(828) 00:14:49.513 fused_ordering(829) 00:14:49.513 fused_ordering(830) 00:14:49.513 fused_ordering(831) 00:14:49.513 fused_ordering(832) 00:14:49.513 fused_ordering(833) 00:14:49.513 fused_ordering(834) 00:14:49.513 fused_ordering(835) 00:14:49.513 fused_ordering(836) 00:14:49.513 fused_ordering(837) 00:14:49.513 fused_ordering(838) 00:14:49.513 fused_ordering(839) 00:14:49.513 fused_ordering(840) 00:14:49.513 fused_ordering(841) 00:14:49.513 fused_ordering(842) 00:14:49.513 fused_ordering(843) 00:14:49.513 fused_ordering(844) 00:14:49.513 fused_ordering(845) 00:14:49.513 fused_ordering(846) 00:14:49.513 fused_ordering(847) 00:14:49.513 fused_ordering(848) 00:14:49.513 fused_ordering(849) 00:14:49.513 fused_ordering(850) 00:14:49.513 fused_ordering(851) 00:14:49.513 fused_ordering(852) 00:14:49.513 fused_ordering(853) 00:14:49.513 fused_ordering(854) 00:14:49.513 fused_ordering(855) 00:14:49.513 fused_ordering(856) 00:14:49.513 fused_ordering(857) 00:14:49.513 fused_ordering(858) 00:14:49.513 fused_ordering(859) 00:14:49.513 fused_ordering(860) 00:14:49.513 fused_ordering(861) 00:14:49.513 fused_ordering(862) 00:14:49.513 fused_ordering(863) 00:14:49.513 fused_ordering(864) 00:14:49.513 fused_ordering(865) 00:14:49.513 fused_ordering(866) 00:14:49.513 fused_ordering(867) 00:14:49.513 fused_ordering(868) 00:14:49.513 fused_ordering(869) 00:14:49.513 fused_ordering(870) 00:14:49.513 fused_ordering(871) 00:14:49.513 fused_ordering(872) 00:14:49.513 fused_ordering(873) 00:14:49.513 fused_ordering(874) 00:14:49.513 fused_ordering(875) 00:14:49.513 fused_ordering(876) 00:14:49.513 fused_ordering(877) 00:14:49.513 fused_ordering(878) 00:14:49.513 fused_ordering(879) 00:14:49.513 fused_ordering(880) 00:14:49.513 fused_ordering(881) 00:14:49.513 fused_ordering(882) 00:14:49.513 fused_ordering(883) 00:14:49.513 fused_ordering(884) 00:14:49.513 fused_ordering(885) 00:14:49.513 fused_ordering(886) 00:14:49.513 fused_ordering(887) 00:14:49.513 fused_ordering(888) 00:14:49.513 fused_ordering(889) 00:14:49.513 fused_ordering(890) 00:14:49.513 fused_ordering(891) 00:14:49.513 fused_ordering(892) 00:14:49.513 fused_ordering(893) 00:14:49.513 fused_ordering(894) 00:14:49.513 fused_ordering(895) 00:14:49.513 fused_ordering(896) 00:14:49.513 fused_ordering(897) 00:14:49.513 fused_ordering(898) 00:14:49.513 fused_ordering(899) 00:14:49.513 fused_ordering(900) 00:14:49.513 fused_ordering(901) 00:14:49.513 fused_ordering(902) 00:14:49.513 fused_ordering(903) 00:14:49.513 fused_ordering(904) 00:14:49.513 fused_ordering(905) 00:14:49.513 fused_ordering(906) 00:14:49.513 
fused_ordering(907) 00:14:49.513 fused_ordering(908) 00:14:49.513 fused_ordering(909) 00:14:49.513 fused_ordering(910) 00:14:49.513 fused_ordering(911) 00:14:49.513 fused_ordering(912) 00:14:49.513 fused_ordering(913) 00:14:49.513 fused_ordering(914) 00:14:49.513 fused_ordering(915) 00:14:49.513 fused_ordering(916) 00:14:49.513 fused_ordering(917) 00:14:49.513 fused_ordering(918) 00:14:49.513 fused_ordering(919) 00:14:49.513 fused_ordering(920) 00:14:49.513 fused_ordering(921) 00:14:49.513 fused_ordering(922) 00:14:49.513 fused_ordering(923) 00:14:49.513 fused_ordering(924) 00:14:49.513 fused_ordering(925) 00:14:49.513 fused_ordering(926) 00:14:49.513 fused_ordering(927) 00:14:49.513 fused_ordering(928) 00:14:49.513 fused_ordering(929) 00:14:49.513 fused_ordering(930) 00:14:49.513 fused_ordering(931) 00:14:49.513 fused_ordering(932) 00:14:49.513 fused_ordering(933) 00:14:49.513 fused_ordering(934) 00:14:49.513 fused_ordering(935) 00:14:49.513 fused_ordering(936) 00:14:49.513 fused_ordering(937) 00:14:49.513 fused_ordering(938) 00:14:49.513 fused_ordering(939) 00:14:49.513 fused_ordering(940) 00:14:49.513 fused_ordering(941) 00:14:49.513 fused_ordering(942) 00:14:49.513 fused_ordering(943) 00:14:49.513 fused_ordering(944) 00:14:49.513 fused_ordering(945) 00:14:49.513 fused_ordering(946) 00:14:49.513 fused_ordering(947) 00:14:49.513 fused_ordering(948) 00:14:49.513 fused_ordering(949) 00:14:49.513 fused_ordering(950) 00:14:49.513 fused_ordering(951) 00:14:49.513 fused_ordering(952) 00:14:49.513 fused_ordering(953) 00:14:49.513 fused_ordering(954) 00:14:49.513 fused_ordering(955) 00:14:49.513 fused_ordering(956) 00:14:49.513 fused_ordering(957) 00:14:49.513 fused_ordering(958) 00:14:49.513 fused_ordering(959) 00:14:49.513 fused_ordering(960) 00:14:49.513 fused_ordering(961) 00:14:49.513 fused_ordering(962) 00:14:49.513 fused_ordering(963) 00:14:49.513 fused_ordering(964) 00:14:49.513 fused_ordering(965) 00:14:49.513 fused_ordering(966) 00:14:49.513 fused_ordering(967) 00:14:49.513 fused_ordering(968) 00:14:49.513 fused_ordering(969) 00:14:49.513 fused_ordering(970) 00:14:49.513 fused_ordering(971) 00:14:49.513 fused_ordering(972) 00:14:49.513 fused_ordering(973) 00:14:49.513 fused_ordering(974) 00:14:49.513 fused_ordering(975) 00:14:49.513 fused_ordering(976) 00:14:49.513 fused_ordering(977) 00:14:49.513 fused_ordering(978) 00:14:49.513 fused_ordering(979) 00:14:49.513 fused_ordering(980) 00:14:49.513 fused_ordering(981) 00:14:49.513 fused_ordering(982) 00:14:49.513 fused_ordering(983) 00:14:49.513 fused_ordering(984) 00:14:49.513 fused_ordering(985) 00:14:49.513 fused_ordering(986) 00:14:49.513 fused_ordering(987) 00:14:49.513 fused_ordering(988) 00:14:49.513 fused_ordering(989) 00:14:49.513 fused_ordering(990) 00:14:49.513 fused_ordering(991) 00:14:49.513 fused_ordering(992) 00:14:49.513 fused_ordering(993) 00:14:49.513 fused_ordering(994) 00:14:49.513 fused_ordering(995) 00:14:49.513 fused_ordering(996) 00:14:49.513 fused_ordering(997) 00:14:49.513 fused_ordering(998) 00:14:49.513 fused_ordering(999) 00:14:49.513 fused_ordering(1000) 00:14:49.513 fused_ordering(1001) 00:14:49.513 fused_ordering(1002) 00:14:49.513 fused_ordering(1003) 00:14:49.513 fused_ordering(1004) 00:14:49.513 fused_ordering(1005) 00:14:49.513 fused_ordering(1006) 00:14:49.513 fused_ordering(1007) 00:14:49.513 fused_ordering(1008) 00:14:49.513 fused_ordering(1009) 00:14:49.513 fused_ordering(1010) 00:14:49.513 fused_ordering(1011) 00:14:49.513 fused_ordering(1012) 00:14:49.513 fused_ordering(1013) 00:14:49.513 
fused_ordering(1014) 00:14:49.513 fused_ordering(1015) 00:14:49.513 fused_ordering(1016) 00:14:49.513 fused_ordering(1017) 00:14:49.513 fused_ordering(1018) 00:14:49.513 fused_ordering(1019) 00:14:49.513 fused_ordering(1020) 00:14:49.513 fused_ordering(1021) 00:14:49.513 fused_ordering(1022) 00:14:49.513 fused_ordering(1023) 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.513 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.514 rmmod nvme_tcp 00:14:49.514 rmmod nvme_fabrics 00:14:49.514 rmmod nvme_keyring 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 593244 ']' 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 593244 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 593244 ']' 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 593244 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.514 06:43:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 593244 00:14:49.514 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:49.514 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:49.514 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 593244' 00:14:49.514 killing process with pid 593244 00:14:49.514 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 593244 00:14:49.514 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 593244 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.772 06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.772 
06:43:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.673 06:43:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.673 00:14:51.673 real 0m8.026s 00:14:51.673 user 0m5.637s 00:14:51.673 sys 0m3.744s 00:14:51.673 06:43:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.673 06:43:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.673 ************************************ 00:14:51.673 END TEST nvmf_fused_ordering 00:14:51.673 ************************************ 00:14:51.932 06:43:39 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:51.932 06:43:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:51.932 06:43:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.932 06:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.932 ************************************ 00:14:51.932 START TEST nvmf_delete_subsystem 00:14:51.932 ************************************ 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:51.932 * Looking for test storage... 00:14:51.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.932 06:43:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.835 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:53.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:53.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.836 06:43:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:53.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:53.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.836 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:14:54.094 00:14:54.094 --- 10.0.0.2 ping statistics --- 00:14:54.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.094 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:14:54.094 00:14:54.094 --- 10.0.0.1 ping statistics --- 00:14:54.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.094 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=595704 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 595704 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 595704 ']' 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.094 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.094 [2024-07-15 06:43:41.578104] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:54.094 [2024-07-15 06:43:41.578196] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.094 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.094 [2024-07-15 06:43:41.643158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:54.353 [2024-07-15 06:43:41.732257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
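Stripped of the xtrace noise, the fixture built above is the standard SPDK phy TCP topology: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the 10.0.0.2 target side, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, a single iptables rule admits port 4420, and the target app runs inside the namespace. Condensed from the ip/iptables/nvmf_tgt calls logged above ($SPDK_DIR stands in for the jenkins workspace checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &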
00:14:54.353 [2024-07-15 06:43:41.732310] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.353 [2024-07-15 06:43:41.732323] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.353 [2024-07-15 06:43:41.732334] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.353 [2024-07-15 06:43:41.732343] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.353 [2024-07-15 06:43:41.732414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.353 [2024-07-15 06:43:41.732419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 [2024-07-15 06:43:41.878413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 [2024-07-15 06:43:41.894624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 NULL1 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 Delay0 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=595735 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:54.353 06:43:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:54.353 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.612 [2024-07-15 06:43:41.969290] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
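The rpc_cmd calls above are thin wrappers over scripts/rpc.py against the default /var/tmp/spdk.sock socket, so the delete_subsystem setup reduces to the sequence below. The null bdev is wrapped in a delay bdev with 1,000,000 us average and p99 latency on both reads and writes, which is what guarantees queued commands are still in flight when the subsystem is deleted mid-run ($SPDK_DIR is again a stand-in for the workspace checkout):

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                 # 1000 MB backing size, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # 1 s avg/p99 read+write latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # then the I/O load that will be in flight when the subsystem is deleted:
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &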
00:14:56.509 06:43:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.509 06:43:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.509 06:43:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 [2024-07-15 06:43:44.100808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554d40 is same with the state(5) to be set 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, 
sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 
00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 starting I/O failed: -6 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 [2024-07-15 06:43:44.101644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8b4000c00 is same with the state(5) to be set 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read 
completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Read completed with error (sct=0, sc=8) 00:14:56.509 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Write completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:56.510 Read completed with error (sct=0, sc=8) 00:14:57.881 [2024-07-15 06:43:45.065969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c620 is same with the state(5) to be set 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 [2024-07-15 06:43:45.102065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154fec0 is same with the state(5) to be set 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with 
error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 [2024-07-15 06:43:45.102990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154fb00 is same with the state(5) to be set 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 [2024-07-15 06:43:45.103582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8b400bfe0 is same with the state(5) to be set 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error 
(sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Write completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 Read completed with error (sct=0, sc=8) 00:14:57.881 [2024-07-15 06:43:45.104271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8b400c600 is same with the state(5) to be set 00:14:57.881 Initializing NVMe Controllers 00:14:57.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.881 Controller IO queue size 128, less than required. 00:14:57.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:57.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:57.881 Initialization complete. Launching workers. 00:14:57.881 ======================================================== 00:14:57.881 Latency(us) 00:14:57.881 Device Information : IOPS MiB/s Average min max 00:14:57.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.17 0.08 897923.74 533.97 1013713.77 00:14:57.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.68 0.08 897293.97 420.81 1014080.72 00:14:57.881 ======================================================== 00:14:57.882 Total : 338.85 0.17 897610.24 420.81 1014080.72 00:14:57.882 00:14:57.882 [2024-07-15 06:43:45.104786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156c620 (9): Bad file descriptor 00:14:57.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:57.882 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.882 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:57.882 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 595735 00:14:57.882 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 595735 00:14:58.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (595735) - No such process 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 595735 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 595735 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 595735 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.140 [2024-07-15 06:43:45.628845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=596133 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:58.140 06:43:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:58.140 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.140 [2024-07-15 06:43:45.692711] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
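The kill -0/sleep pairs that follow are delete_subsystem.sh waiting for the perf client (pid 596133) to exit on its own once its timed run, stretched by the 1 s delay bdev, drains. The trace corresponds to a plain bounded poll, roughly:

    delay=0
    while kill -0 596133 2>/dev/null; do   # 596133 = this run's spdk_nvme_perf pid
        (( delay++ > 20 )) && exit 1       # give up after ~10 s of 0.5 s polls
        sleep 0.5
    done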
00:14:58.770 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:58.770 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:14:58.770 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.334 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.334 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:14:59.334 06:43:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.591 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.591 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:14:59.591 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.156 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.156 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:15:00.156 06:43:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.721 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.721 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:15:00.721 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.285 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.285 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:15:01.285 06:43:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.285 Initializing NVMe Controllers 00:15:01.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.285 Controller IO queue size 128, less than required. 00:15:01.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:01.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:01.285 Initialization complete. Launching workers. 
00:15:01.285 ======================================================== 00:15:01.285 Latency(us) 00:15:01.285 Device Information : IOPS MiB/s Average min max 00:15:01.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004601.01 1000230.35 1041162.79 00:15:01.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004401.54 1000209.83 1041229.25 00:15:01.285 ======================================================== 00:15:01.285 Total : 256.00 0.12 1004501.27 1000209.83 1041229.25 00:15:01.285 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 596133 00:15:01.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (596133) - No such process 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 596133 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.852 rmmod nvme_tcp 00:15:01.852 rmmod nvme_fabrics 00:15:01.852 rmmod nvme_keyring 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 595704 ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 595704 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 595704 ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 595704 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 595704 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 595704' 00:15:01.852 killing process with pid 595704 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 595704 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 595704 
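For reference, the killprocess helper traced above has roughly this shape (condensed; the real autotest_common.sh version does more argument checking and handles sudo-wrapped processes differently):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                   # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # real helper special-cases sudo here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it before the next test starts
    }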
00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.852 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.853 06:43:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.385 06:43:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.385 00:15:04.385 real 0m12.164s 00:15:04.385 user 0m27.621s 00:15:04.385 sys 0m2.916s 00:15:04.385 06:43:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:04.385 06:43:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 ************************************ 00:15:04.385 END TEST nvmf_delete_subsystem 00:15:04.385 ************************************ 00:15:04.385 06:43:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.385 06:43:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:04.385 06:43:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:04.385 06:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 ************************************ 00:15:04.385 START TEST nvmf_ns_masking 00:15:04.385 ************************************ 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.385 * Looking for test storage... 
00:15:04.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=ee34d8d4-4434-4c99-a813-c24e3185fa43 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.385 06:43:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.385 06:43:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:06.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:06.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:06.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
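The device scan traced above is nvmf/common.sh matching known NVMf-capable PCI IDs (Intel e810/x722, Mellanox mlx5 families) and resolving each matching function to its kernel netdev through sysfs. A condensed sketch of that lookup, using the 0000:0a:00.0 device the log just reported (the echo wording is illustrative):

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob common.sh uses
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0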
00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:06.287 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:15:06.287 00:15:06.287 --- 10.0.0.2 ping statistics --- 00:15:06.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.287 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:06.287 00:15:06.287 --- 10.0.0.1 ping statistics --- 00:15:06.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.287 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.287 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=598533 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 598533 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 598533 ']' 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.288 06:43:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.288 [2024-07-15 06:43:53.882896] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
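At this point the harness has built a two-endpoint test topology on a single host: the first port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace to act as the NVMe/TCP target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and reachability is verified with ping in both directions. Condensed from the nvmf_tcp_init commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    # nvmf_tgt is then launched inside the namespace, as logged below:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF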
00:15:06.288 [2024-07-15 06:43:53.882991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.546 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.546 [2024-07-15 06:43:53.957479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.546 [2024-07-15 06:43:54.049661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.546 [2024-07-15 06:43:54.049725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.546 [2024-07-15 06:43:54.049752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.546 [2024-07-15 06:43:54.049766] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.546 [2024-07-15 06:43:54.049778] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.546 [2024-07-15 06:43:54.049867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.546 [2024-07-15 06:43:54.049921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.546 [2024-07-15 06:43:54.049974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.546 [2024-07-15 06:43:54.049976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.803 06:43:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.061 [2024-07-15 06:43:54.434515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.061 06:43:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:07.061 06:43:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:07.061 06:43:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:07.320 Malloc1 00:15:07.320 06:43:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.578 Malloc2 00:15:07.578 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:07.836 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:08.094 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.351 [2024-07-15 06:43:55.733196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ee34d8d4-4434-4c99-a813-c24e3185fa43 -a 10.0.0.2 -s 4420 -i 4 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:08.351 06:43:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.881 [ 0]:0x1 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9211d72555e449e2923fbafd7cab7c90 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9211d72555e449e2923fbafd7cab7c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.881 06:43:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
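The `[ 0]:0x1` output below, together with the nguid comparison that follows it, is the test's ns_is_visible helper at work: a namespace counts as visible only if `nvme list-ns` lists its NSID and `nvme id-ns` returns a non-zero NGUID, while a masked namespace yields the all-zero NGUID. A condensed sketch of the helper's logic as reconstructed from the traced commands (the function body is paraphrased, not copied from ns_masking.sh):

    ns_is_visible() {
        local nsid=$1                                   # e.g. 0x1 or 0x2
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ "$nguid" != "00000000000000000000000000000000" ]]   # all-zero NGUID means masked
    }

Visibility itself is toggled purely on the target side, via the nvmf_subsystem_add_ns --no-auto-visible, nvmf_ns_add_host, and nvmf_ns_remove_host RPCs traced later in this run; the initiator never reconnects between checks.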
00:15:10.881 [ 0]:0x1 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9211d72555e449e2923fbafd7cab7c90 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9211d72555e449e2923fbafd7cab7c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.881 [ 1]:0x2 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:10.881 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.146 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.402 06:43:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ee34d8d4-4434-4c99-a813-c24e3185fa43 -a 10.0.0.2 -s 4420 -i 4 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:11.660 06:43:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:13.557 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:13.557 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:13.557 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.558 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:13.558 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:13.558 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:13.558 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:13.558 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.815 [ 0]:0x2 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.815 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:14.072 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:14.072 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.072 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:14.072 [ 0]:0x1 00:15:14.072 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.072 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9211d72555e449e2923fbafd7cab7c90 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9211d72555e449e2923fbafd7cab7c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:14.330 [ 1]:0x2 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.330 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.588 06:44:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:14.588 
06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:14.588 [ 0]:0x2 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.588 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.845 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:14.845 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ee34d8d4-4434-4c99-a813-c24e3185fa43 -a 10.0.0.2 -s 4420 -i 4 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:15.103 06:44:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:17.024 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:17.282 [ 0]:0x1 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9211d72555e449e2923fbafd7cab7c90 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9211d72555e449e2923fbafd7cab7c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:17.282 [ 1]:0x2 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.282 06:44:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:17.540 [ 0]:0x2 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:17.540 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.798 [2024-07-15 06:44:05.372822] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:17.798 request: 00:15:17.798 { 00:15:17.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.798 "nsid": 2, 00:15:17.798 "host": "nqn.2016-06.io.spdk:host1", 00:15:17.798 "method": 
"nvmf_ns_remove_host", 00:15:17.798 "req_id": 1 00:15:17.798 } 00:15:17.798 Got JSON-RPC error response 00:15:17.798 response: 00:15:17.798 { 00:15:17.798 "code": -32602, 00:15:17.798 "message": "Invalid parameters" 00:15:17.798 } 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.798 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:18.057 [ 0]:0x2 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5e16c6bd99c34bd7bd9a9e7f984b4f25 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5e16c6bd99c34bd7bd9a9e7f984b4f25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.057 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.315 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.315 rmmod nvme_tcp 00:15:18.573 rmmod nvme_fabrics 00:15:18.573 rmmod nvme_keyring 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 598533 ']' 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 598533 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 598533 ']' 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 598533 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 598533 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 598533' 00:15:18.573 killing process with pid 598533 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 598533 00:15:18.573 06:44:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 598533 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.833 06:44:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.732 06:44:08 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.732 00:15:20.732 real 0m16.784s 00:15:20.732 user 0m52.094s 00:15:20.732 sys 0m3.862s 00:15:20.732 06:44:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.732 06:44:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:20.732 ************************************ 00:15:20.732 END TEST nvmf_ns_masking 00:15:20.732 ************************************ 00:15:20.990 06:44:08 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:20.990 06:44:08 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:20.990 06:44:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:20.990 06:44:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:20.990 06:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.990 ************************************ 00:15:20.990 START TEST nvmf_nvme_cli 00:15:20.990 ************************************ 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:20.990 * Looking for test storage... 00:15:20.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.990 06:44:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.893 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.893 06:44:10 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.893 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.893 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:15:22.894 00:15:22.894 --- 10.0.0.2 ping statistics --- 00:15:22.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.894 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:15:22.894 00:15:22.894 --- 10.0.0.1 ping statistics --- 00:15:22.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.894 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=602043 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 602043 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 602043 ']' 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.894 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.894 [2024-07-15 06:44:10.485646] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:22.894 [2024-07-15 06:44:10.485727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.153 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.153 [2024-07-15 06:44:10.558671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.153 [2024-07-15 06:44:10.649185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.153 [2024-07-15 06:44:10.649246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.153 [2024-07-15 06:44:10.649262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.153 [2024-07-15 06:44:10.649276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.153 [2024-07-15 06:44:10.649287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.153 [2024-07-15 06:44:10.649377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.153 [2024-07-15 06:44:10.649448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.153 [2024-07-15 06:44:10.649541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.153 [2024-07-15 06:44:10.649543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 [2024-07-15 06:44:10.806705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 Malloc0 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 Malloc1 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 [2024-07-15 06:44:10.889830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.412 06:44:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:23.670 00:15:23.670 Discovery Log Number of Records 2, Generation counter 2 00:15:23.670 =====Discovery Log Entry 0====== 00:15:23.670 trtype: tcp 00:15:23.670 adrfam: ipv4 00:15:23.670 subtype: current discovery subsystem 00:15:23.670 treq: not required 00:15:23.670 portid: 0 00:15:23.670 trsvcid: 4420 00:15:23.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:23.670 traddr: 10.0.0.2 00:15:23.670 eflags: explicit discovery connections, duplicate discovery information 00:15:23.670 sectype: none 00:15:23.670 =====Discovery Log Entry 1====== 00:15:23.670 trtype: tcp 00:15:23.670 adrfam: ipv4 00:15:23.670 subtype: nvme subsystem 00:15:23.670 treq: not required 00:15:23.670 portid: 0 00:15:23.670 trsvcid: 
4420 00:15:23.670 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:23.670 traddr: 10.0.0.2 00:15:23.670 eflags: none 00:15:23.670 sectype: none 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:23.670 06:44:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:24.236 06:44:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.131 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:26.388 06:44:13 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:26.388 /dev/nvme0n1 ]] 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.388 06:44:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:26.646 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:26.903 06:44:14 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.903 rmmod nvme_tcp 00:15:26.903 rmmod nvme_fabrics 00:15:26.903 rmmod nvme_keyring 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 602043 ']' 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 602043 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 602043 ']' 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 602043 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 602043 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 602043' 00:15:26.903 killing process with pid 602043 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 602043 00:15:26.903 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 602043 00:15:27.161 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.162 06:44:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.061 06:44:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.061 00:15:29.061 real 0m8.285s 00:15:29.061 user 0m16.130s 00:15:29.061 sys 0m2.149s 00:15:29.061 06:44:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.061 06:44:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 ************************************ 00:15:29.061 END TEST nvmf_nvme_cli 00:15:29.061 ************************************ 00:15:29.319 06:44:16 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:29.319 06:44:16 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:29.319 06:44:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:29.319 06:44:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:29.319 06:44:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.319 ************************************ 00:15:29.319 START TEST nvmf_vfio_user 00:15:29.319 ************************************ 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:29.319 * Looking for test storage... 00:15:29.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.319 06:44:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:29.320 
06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=602958 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 602958' 00:15:29.320 Process pid: 602958 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 602958 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 602958 ']' 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:29.320 06:44:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:29.320 [2024-07-15 06:44:16.826899] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:29.320 [2024-07-15 06:44:16.826994] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.320 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.320 [2024-07-15 06:44:16.886250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.578 [2024-07-15 06:44:16.975692] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.578 [2024-07-15 06:44:16.975761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.578 [2024-07-15 06:44:16.975775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.578 [2024-07-15 06:44:16.975801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.578 [2024-07-15 06:44:16.975811] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:29.578 [2024-07-15 06:44:16.975884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.578 [2024-07-15 06:44:16.975923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.578 [2024-07-15 06:44:16.975975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.578 [2024-07-15 06:44:16.975977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.578 06:44:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:29.578 06:44:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:29.578 06:44:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:30.511 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:31.077 Malloc1 00:15:31.077 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:31.335 06:44:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:31.593 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:31.852 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.852 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:31.852 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:32.110 Malloc2 00:15:32.110 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:32.368 06:44:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:32.627 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:32.913 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:32.913 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:32.913 06:44:20 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.913 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:32.913 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:32.914 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:32.914 [2024-07-15 06:44:20.470560] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:32.914 [2024-07-15 06:44:20.470607] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603388 ] 00:15:32.914 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.176 [2024-07-15 06:44:20.505762] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:33.176 [2024-07-15 06:44:20.514362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.176 [2024-07-15 06:44:20.514391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5778998000 00:15:33.176 [2024-07-15 06:44:20.515356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.516346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.517348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.518354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.519356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.520360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.521362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.522371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.176 [2024-07-15 06:44:20.523377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.176 [2024-07-15 06:44:20.523398] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f577774a000 00:15:33.176 [2024-07-15 06:44:20.524518] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:33.176 [2024-07-15 06:44:20.540476] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:33.176 [2024-07-15 06:44:20.540513] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:33.177 [2024-07-15 06:44:20.543498] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:33.177 [2024-07-15 06:44:20.543557] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:33.177 [2024-07-15 06:44:20.543640] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:33.177 [2024-07-15 06:44:20.543667] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:33.177 [2024-07-15 06:44:20.543677] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:33.177 [2024-07-15 06:44:20.544496] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:33.177 [2024-07-15 06:44:20.544522] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:33.177 [2024-07-15 06:44:20.544536] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:33.177 [2024-07-15 06:44:20.545514] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:33.177 [2024-07-15 06:44:20.545534] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:33.177 [2024-07-15 06:44:20.545548] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.547889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:33.177 [2024-07-15 06:44:20.547910] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.548532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:33.177 [2024-07-15 06:44:20.548567] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:33.177 [2024-07-15 06:44:20.548577] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.548593] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.548703] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:33.177 [2024-07-15 06:44:20.548711] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.548720] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:33.177 [2024-07-15 06:44:20.549544] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:33.177 [2024-07-15 06:44:20.550539] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:33.177 [2024-07-15 06:44:20.551545] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:33.177 [2024-07-15 06:44:20.552534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.177 [2024-07-15 06:44:20.552648] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:33.177 [2024-07-15 06:44:20.553555] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:33.177 [2024-07-15 06:44:20.553574] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:33.177 [2024-07-15 06:44:20.553584] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.553633] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:33.177 [2024-07-15 06:44:20.553660] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.553692] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.177 [2024-07-15 06:44:20.553702] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.177 [2024-07-15 06:44:20.553720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.553775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.553794] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:33.177 [2024-07-15 06:44:20.553803] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:33.177 [2024-07-15 06:44:20.553811] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:33.177 [2024-07-15 06:44:20.553818] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:33.177 [2024-07-15 06:44:20.553826] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:15:33.177 [2024-07-15 06:44:20.553833] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:33.177 [2024-07-15 06:44:20.553841] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.553870] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.553896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.553915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.553932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.177 [2024-07-15 06:44:20.553945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.177 [2024-07-15 06:44:20.553957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.177 [2024-07-15 06:44:20.553969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.177 [2024-07-15 06:44:20.553978] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.553995] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.554023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.554033] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:33.177 [2024-07-15 06:44:20.554042] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554053] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554066] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.554094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.554180] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554196] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554209] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:33.177 [2024-07-15 06:44:20.554217] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:33.177 [2024-07-15 06:44:20.554242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.554256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.554271] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:33.177 [2024-07-15 06:44:20.554286] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554302] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554314] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.177 [2024-07-15 06:44:20.554322] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.177 [2024-07-15 06:44:20.554331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.554351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.554371] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554385] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.177 [2024-07-15 06:44:20.554404] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.177 [2024-07-15 06:44:20.554413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.177 [2024-07-15 06:44:20.554424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:33.177 [2024-07-15 06:44:20.554437] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554447] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554460] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554470] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554478] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:33.177 [2024-07-15 06:44:20.554486] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:33.177 [2024-07-15 06:44:20.554494] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:33.178 [2024-07-15 06:44:20.554502] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:33.178 [2024-07-15 06:44:20.554530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554652] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:33.178 [2024-07-15 06:44:20.554660] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:33.178 [2024-07-15 06:44:20.554666] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:33.178 [2024-07-15 06:44:20.554672] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:33.178 [2024-07-15 06:44:20.554681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:33.178 [2024-07-15 06:44:20.554698] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:33.178 [2024-07-15 06:44:20.554712] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:33.178 [2024-07-15 06:44:20.554724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554736] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:33.178 [2024-07-15 06:44:20.554744] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.178 [2024-07-15 06:44:20.554753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554764] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:33.178 [2024-07-15 06:44:20.554772] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:33.178 [2024-07-15 06:44:20.554780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:33.178 [2024-07-15 06:44:20.554792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:33.178 [2024-07-15 06:44:20.554842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:33.178 ===================================================== 00:15:33.178 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.178 ===================================================== 00:15:33.178 Controller Capabilities/Features 00:15:33.178 ================================ 00:15:33.178 Vendor ID: 4e58 00:15:33.178 Subsystem Vendor ID: 4e58 00:15:33.178 Serial Number: SPDK1 00:15:33.178 Model Number: SPDK bdev Controller 00:15:33.178 Firmware Version: 24.05.1 00:15:33.178 Recommended Arb Burst: 6 00:15:33.178 IEEE OUI Identifier: 8d 6b 50 00:15:33.178 Multi-path I/O 00:15:33.178 May have multiple subsystem ports: Yes 00:15:33.178 May have multiple controllers: Yes 00:15:33.178 Associated with SR-IOV VF: No 00:15:33.178 Max Data Transfer Size: 131072 00:15:33.178 Max Number of Namespaces: 32 00:15:33.178 Max Number of I/O Queues: 127 00:15:33.178 NVMe Specification Version (VS): 1.3 00:15:33.178 NVMe Specification Version (Identify): 1.3 00:15:33.178 Maximum Queue Entries: 256 00:15:33.178 Contiguous Queues Required: Yes 00:15:33.178 Arbitration Mechanisms Supported 00:15:33.178 Weighted Round Robin: Not Supported 00:15:33.178 Vendor Specific: Not Supported 00:15:33.178 Reset Timeout: 15000 ms 00:15:33.178 Doorbell Stride: 4 bytes 00:15:33.178 NVM Subsystem Reset: Not Supported 00:15:33.178 Command Sets Supported 00:15:33.178 NVM Command Set: Supported 00:15:33.178 Boot Partition: Not Supported 00:15:33.178 Memory Page Size Minimum: 4096 bytes 00:15:33.178 Memory Page Size Maximum: 4096 bytes 00:15:33.178 Persistent Memory Region: Not Supported 00:15:33.178 Optional Asynchronous Events Supported 00:15:33.178 Namespace Attribute Notices: Supported 00:15:33.178 Firmware Activation Notices: Not Supported 00:15:33.178 ANA Change Notices: Not Supported 00:15:33.178 PLE Aggregate Log Change Notices: 
Not Supported 00:15:33.178 LBA Status Info Alert Notices: Not Supported 00:15:33.178 EGE Aggregate Log Change Notices: Not Supported 00:15:33.178 Normal NVM Subsystem Shutdown event: Not Supported 00:15:33.178 Zone Descriptor Change Notices: Not Supported 00:15:33.178 Discovery Log Change Notices: Not Supported 00:15:33.178 Controller Attributes 00:15:33.178 128-bit Host Identifier: Supported 00:15:33.178 Non-Operational Permissive Mode: Not Supported 00:15:33.178 NVM Sets: Not Supported 00:15:33.178 Read Recovery Levels: Not Supported 00:15:33.178 Endurance Groups: Not Supported 00:15:33.178 Predictable Latency Mode: Not Supported 00:15:33.178 Traffic Based Keep ALive: Not Supported 00:15:33.178 Namespace Granularity: Not Supported 00:15:33.178 SQ Associations: Not Supported 00:15:33.178 UUID List: Not Supported 00:15:33.178 Multi-Domain Subsystem: Not Supported 00:15:33.178 Fixed Capacity Management: Not Supported 00:15:33.178 Variable Capacity Management: Not Supported 00:15:33.178 Delete Endurance Group: Not Supported 00:15:33.178 Delete NVM Set: Not Supported 00:15:33.178 Extended LBA Formats Supported: Not Supported 00:15:33.178 Flexible Data Placement Supported: Not Supported 00:15:33.178 00:15:33.178 Controller Memory Buffer Support 00:15:33.178 ================================ 00:15:33.178 Supported: No 00:15:33.178 00:15:33.178 Persistent Memory Region Support 00:15:33.178 ================================ 00:15:33.178 Supported: No 00:15:33.178 00:15:33.178 Admin Command Set Attributes 00:15:33.178 ============================ 00:15:33.178 Security Send/Receive: Not Supported 00:15:33.178 Format NVM: Not Supported 00:15:33.178 Firmware Activate/Download: Not Supported 00:15:33.178 Namespace Management: Not Supported 00:15:33.178 Device Self-Test: Not Supported 00:15:33.178 Directives: Not Supported 00:15:33.178 NVMe-MI: Not Supported 00:15:33.178 Virtualization Management: Not Supported 00:15:33.178 Doorbell Buffer Config: Not Supported 00:15:33.178 Get LBA Status Capability: Not Supported 00:15:33.178 Command & Feature Lockdown Capability: Not Supported 00:15:33.178 Abort Command Limit: 4 00:15:33.178 Async Event Request Limit: 4 00:15:33.178 Number of Firmware Slots: N/A 00:15:33.178 Firmware Slot 1 Read-Only: N/A 00:15:33.178 Firmware Activation Without Reset: N/A 00:15:33.178 Multiple Update Detection Support: N/A 00:15:33.178 Firmware Update Granularity: No Information Provided 00:15:33.178 Per-Namespace SMART Log: No 00:15:33.178 Asymmetric Namespace Access Log Page: Not Supported 00:15:33.178 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:33.178 Command Effects Log Page: Supported 00:15:33.178 Get Log Page Extended Data: Supported 00:15:33.178 Telemetry Log Pages: Not Supported 00:15:33.178 Persistent Event Log Pages: Not Supported 00:15:33.178 Supported Log Pages Log Page: May Support 00:15:33.178 Commands Supported & Effects Log Page: Not Supported 00:15:33.178 Feature Identifiers & Effects Log Page:May Support 00:15:33.178 NVMe-MI Commands & Effects Log Page: May Support 00:15:33.178 Data Area 4 for Telemetry Log: Not Supported 00:15:33.178 Error Log Page Entries Supported: 128 00:15:33.178 Keep Alive: Supported 00:15:33.178 Keep Alive Granularity: 10000 ms 00:15:33.178 00:15:33.178 NVM Command Set Attributes 00:15:33.178 ========================== 00:15:33.178 Submission Queue Entry Size 00:15:33.178 Max: 64 00:15:33.178 Min: 64 00:15:33.178 Completion Queue Entry Size 00:15:33.178 Max: 16 00:15:33.178 Min: 16 00:15:33.178 Number of Namespaces: 32 00:15:33.178 Compare 
Command: Supported 00:15:33.178 Write Uncorrectable Command: Not Supported 00:15:33.178 Dataset Management Command: Supported 00:15:33.178 Write Zeroes Command: Supported 00:15:33.178 Set Features Save Field: Not Supported 00:15:33.178 Reservations: Not Supported 00:15:33.178 Timestamp: Not Supported 00:15:33.178 Copy: Supported 00:15:33.178 Volatile Write Cache: Present 00:15:33.178 Atomic Write Unit (Normal): 1 00:15:33.178 Atomic Write Unit (PFail): 1 00:15:33.178 Atomic Compare & Write Unit: 1 00:15:33.178 Fused Compare & Write: Supported 00:15:33.178 Scatter-Gather List 00:15:33.178 SGL Command Set: Supported (Dword aligned) 00:15:33.178 SGL Keyed: Not Supported 00:15:33.178 SGL Bit Bucket Descriptor: Not Supported 00:15:33.178 SGL Metadata Pointer: Not Supported 00:15:33.178 Oversized SGL: Not Supported 00:15:33.178 SGL Metadata Address: Not Supported 00:15:33.178 SGL Offset: Not Supported 00:15:33.178 Transport SGL Data Block: Not Supported 00:15:33.178 Replay Protected Memory Block: Not Supported 00:15:33.178 00:15:33.178 Firmware Slot Information 00:15:33.178 ========================= 00:15:33.178 Active slot: 1 00:15:33.178 Slot 1 Firmware Revision: 24.05.1 00:15:33.178 00:15:33.178 00:15:33.178 Commands Supported and Effects 00:15:33.178 ============================== 00:15:33.178 Admin Commands 00:15:33.179 -------------- 00:15:33.179 Get Log Page (02h): Supported 00:15:33.179 Identify (06h): Supported 00:15:33.179 Abort (08h): Supported 00:15:33.179 Set Features (09h): Supported 00:15:33.179 Get Features (0Ah): Supported 00:15:33.179 Asynchronous Event Request (0Ch): Supported 00:15:33.179 Keep Alive (18h): Supported 00:15:33.179 I/O Commands 00:15:33.179 ------------ 00:15:33.179 Flush (00h): Supported LBA-Change 00:15:33.179 Write (01h): Supported LBA-Change 00:15:33.179 Read (02h): Supported 00:15:33.179 Compare (05h): Supported 00:15:33.179 Write Zeroes (08h): Supported LBA-Change 00:15:33.179 Dataset Management (09h): Supported LBA-Change 00:15:33.179 Copy (19h): Supported LBA-Change 00:15:33.179 Unknown (79h): Supported LBA-Change 00:15:33.179 Unknown (7Ah): Supported 00:15:33.179 00:15:33.179 Error Log 00:15:33.179 ========= 00:15:33.179 00:15:33.179 Arbitration 00:15:33.179 =========== 00:15:33.179 Arbitration Burst: 1 00:15:33.179 00:15:33.179 Power Management 00:15:33.179 ================ 00:15:33.179 Number of Power States: 1 00:15:33.179 Current Power State: Power State #0 00:15:33.179 Power State #0: 00:15:33.179 Max Power: 0.00 W 00:15:33.179 Non-Operational State: Operational 00:15:33.179 Entry Latency: Not Reported 00:15:33.179 Exit Latency: Not Reported 00:15:33.179 Relative Read Throughput: 0 00:15:33.179 Relative Read Latency: 0 00:15:33.179 Relative Write Throughput: 0 00:15:33.179 Relative Write Latency: 0 00:15:33.179 Idle Power: Not Reported 00:15:33.179 Active Power: Not Reported 00:15:33.179 Non-Operational Permissive Mode: Not Supported 00:15:33.179 00:15:33.179 Health Information 00:15:33.179 ================== 00:15:33.179 Critical Warnings: 00:15:33.179 Available Spare Space: OK 00:15:33.179 Temperature: OK 00:15:33.179 Device Reliability: OK 00:15:33.179 Read Only: No 00:15:33.179 Volatile Memory Backup: OK 00:15:33.179 Current Temperature: 0 Kelvin[2024-07-15 06:44:20.555004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:33.179 [2024-07-15 06:44:20.555022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:15:33.179 [2024-07-15 06:44:20.555062] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:33.179 [2024-07-15 06:44:20.555079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.179 [2024-07-15 06:44:20.555090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.179 [2024-07-15 06:44:20.555100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.179 [2024-07-15 06:44:20.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.179 [2024-07-15 06:44:20.556892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:33.179 [2024-07-15 06:44:20.556922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:33.179 [2024-07-15 06:44:20.557570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.179 [2024-07-15 06:44:20.557653] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:33.179 [2024-07-15 06:44:20.557667] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:33.179 [2024-07-15 06:44:20.558575] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:33.179 [2024-07-15 06:44:20.558599] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:33.179 [2024-07-15 06:44:20.558651] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:33.179 [2024-07-15 06:44:20.561891] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:33.179 (-273 Celsius) 00:15:33.179 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:33.179 Available Spare: 0% 00:15:33.179 Available Spare Threshold: 0% 00:15:33.179 Life Percentage Used: 0% 00:15:33.179 Data Units Read: 0 00:15:33.179 Data Units Written: 0 00:15:33.179 Host Read Commands: 0 00:15:33.179 Host Write Commands: 0 00:15:33.179 Controller Busy Time: 0 minutes 00:15:33.179 Power Cycles: 0 00:15:33.179 Power On Hours: 0 hours 00:15:33.179 Unsafe Shutdowns: 0 00:15:33.179 Unrecoverable Media Errors: 0 00:15:33.179 Lifetime Error Log Entries: 0 00:15:33.179 Warning Temperature Time: 0 minutes 00:15:33.179 Critical Temperature Time: 0 minutes 00:15:33.179 00:15:33.179 Number of Queues 00:15:33.179 ================ 00:15:33.179 Number of I/O Submission Queues: 127 00:15:33.179 Number of I/O Completion Queues: 127 00:15:33.179 00:15:33.179 Active Namespaces 00:15:33.179 ================= 00:15:33.179 Namespace ID:1 00:15:33.179 Error Recovery Timeout: Unlimited 00:15:33.179 Command Set Identifier: NVM (00h) 00:15:33.179 Deallocate: Supported 00:15:33.179 Deallocated/Unwritten Error: Not Supported 00:15:33.179 Deallocated Read Value: Unknown 00:15:33.179 
Deallocate in Write Zeroes: Not Supported 00:15:33.179 Deallocated Guard Field: 0xFFFF 00:15:33.179 Flush: Supported 00:15:33.179 Reservation: Supported 00:15:33.179 Namespace Sharing Capabilities: Multiple Controllers 00:15:33.179 Size (in LBAs): 131072 (0GiB) 00:15:33.179 Capacity (in LBAs): 131072 (0GiB) 00:15:33.179 Utilization (in LBAs): 131072 (0GiB) 00:15:33.179 NGUID: A8294F0E97074DD8A9D7462F8F544FF1 00:15:33.179 UUID: a8294f0e-9707-4dd8-a9d7-462f8f544ff1 00:15:33.179 Thin Provisioning: Not Supported 00:15:33.179 Per-NS Atomic Units: Yes 00:15:33.179 Atomic Boundary Size (Normal): 0 00:15:33.179 Atomic Boundary Size (PFail): 0 00:15:33.179 Atomic Boundary Offset: 0 00:15:33.179 Maximum Single Source Range Length: 65535 00:15:33.179 Maximum Copy Length: 65535 00:15:33.179 Maximum Source Range Count: 1 00:15:33.179 NGUID/EUI64 Never Reused: No 00:15:33.179 Namespace Write Protected: No 00:15:33.179 Number of LBA Formats: 1 00:15:33.179 Current LBA Format: LBA Format #00 00:15:33.179 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:33.179 00:15:33.179 06:44:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:33.179 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.439 [2024-07-15 06:44:20.792752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:38.716 Initializing NVMe Controllers 00:15:38.716 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.716 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:38.716 Initialization complete. Launching workers. 00:15:38.716 ======================================================== 00:15:38.716 Latency(us) 00:15:38.716 Device Information : IOPS MiB/s Average min max 00:15:38.716 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35870.40 140.12 3569.48 1152.13 8986.83 00:15:38.716 ======================================================== 00:15:38.716 Total : 35870.40 140.12 3569.48 1152.13 8986.83 00:15:38.716 00:15:38.716 [2024-07-15 06:44:25.815371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.716 06:44:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:38.716 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.716 [2024-07-15 06:44:26.044458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.990 Initializing NVMe Controllers 00:15:43.990 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:43.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:43.990 Initialization complete. Launching workers. 
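Steps @84 and @85 above invoke spdk_nvme_perf twice against the same vfio-user endpoint, differing only in the -w workload argument; the queue depth, IO size, run time, and core mask are shared, and the write-run latency table follows below. A minimal standalone sketch of the same read/write sweep, assuming it is run from the SPDK build tree checked out by this job (the transport ID string and all parameters are copied from the log above):

  # Repeat the @84/@85 perf steps against the vfio-user endpoint.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  for wl in read write; do
      ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
  done

Note that only the -r transport ID distinguishes this from a perf run against a local PCIe controller; the vfio-user socket path takes the place of a BDF.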
00:15:43.990 ======================================================== 00:15:43.990 Latency(us) 00:15:43.990 Device Information : IOPS MiB/s Average min max 00:15:43.990 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.27 5957.71 11982.11 00:15:43.990 ======================================================== 00:15:43.990 Total : 16051.20 62.70 7984.27 5957.71 11982.11 00:15:43.990 00:15:43.990 [2024-07-15 06:44:31.081124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.990 06:44:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:43.990 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.990 [2024-07-15 06:44:31.295225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:49.257 [2024-07-15 06:44:36.367217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.257 Initializing NVMe Controllers 00:15:49.257 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:49.257 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:49.257 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:49.257 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:49.257 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:49.257 Initialization complete. Launching workers. 00:15:49.257 Starting thread on core 2 00:15:49.257 Starting thread on core 3 00:15:49.257 Starting thread on core 1 00:15:49.257 06:44:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:49.257 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.257 [2024-07-15 06:44:36.669417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.540 [2024-07-15 06:44:39.883147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.540 Initializing NVMe Controllers 00:15:52.540 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.540 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.540 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:52.540 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:52.540 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:52.540 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:52.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:52.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:52.540 Initialization complete. Launching workers. 
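Step @86 drives the reconnect example with a mixed randrw workload (-q 32 -o 4096 -M 50) across cores 1-3 (mask 0xE), and step @87 runs the arbitration example, which echoes its effective configuration (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf ...) before starting one urgent-priority worker per core in the mask; the per-core throughput lines follow below. A sketch of the arbitration step on its own, assuming the same build-tree layout:

  # Arbitration example: 3-second run, one worker per core in the 0xf mask.
  ./build/examples/arbitration -t 3 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -d 256 -g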
00:15:52.540 Starting thread on core 1 with urgent priority queue 00:15:52.540 Starting thread on core 2 with urgent priority queue 00:15:52.540 Starting thread on core 3 with urgent priority queue 00:15:52.540 Starting thread on core 0 with urgent priority queue 00:15:52.540 SPDK bdev Controller (SPDK1 ) core 0: 2472.00 IO/s 40.45 secs/100000 ios 00:15:52.540 SPDK bdev Controller (SPDK1 ) core 1: 3001.00 IO/s 33.32 secs/100000 ios 00:15:52.540 SPDK bdev Controller (SPDK1 ) core 2: 2849.33 IO/s 35.10 secs/100000 ios 00:15:52.540 SPDK bdev Controller (SPDK1 ) core 3: 2832.33 IO/s 35.31 secs/100000 ios 00:15:52.540 ======================================================== 00:15:52.540 00:15:52.540 06:44:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:52.540 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.799 [2024-07-15 06:44:40.185408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.799 Initializing NVMe Controllers 00:15:52.799 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.799 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.799 Namespace ID: 1 size: 0GB 00:15:52.799 Initialization complete. 00:15:52.799 INFO: using host memory buffer for IO 00:15:52.799 Hello world! 00:15:52.799 [2024-07-15 06:44:40.221023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.799 06:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:52.799 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.059 [2024-07-15 06:44:40.518363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:53.993 Initializing NVMe Controllers 00:15:53.993 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:53.993 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:53.993 Initialization complete. Launching workers. 
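Step @89 runs the overhead tool against the same endpoint; its report follows below. The first two lines summarize the submit- and complete-path latency as avg/min/max in nanoseconds, and each histogram row after that is a microsecond bucket showing the cumulative percentage of IOs up to that bucket, with the per-bucket IO count in parentheses. A sketch of the invocation, assuming the build-tree paths used by this job:

  # Overhead measurement: 4 KiB IOs for 1 second, latency histogram enabled (-H).
  ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'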
00:15:53.993 submit (in ns) avg, min, max = 9235.9, 3496.7, 4021800.0 00:15:53.993 complete (in ns) avg, min, max = 22486.7, 2063.3, 4016184.4 00:15:53.993 00:15:53.993 Submit histogram 00:15:53.993 ================ 00:15:53.993 Range in us Cumulative Count 00:15:53.993 3.484 - 3.508: 0.2616% ( 35) 00:15:53.993 3.508 - 3.532: 1.2108% ( 127) 00:15:53.993 3.532 - 3.556: 3.2287% ( 270) 00:15:53.993 3.556 - 3.579: 8.5800% ( 716) 00:15:53.993 3.579 - 3.603: 16.7862% ( 1098) 00:15:53.993 3.603 - 3.627: 25.7773% ( 1203) 00:15:53.993 3.627 - 3.650: 34.2601% ( 1135) 00:15:53.993 3.650 - 3.674: 42.5187% ( 1105) 00:15:53.993 3.674 - 3.698: 49.4843% ( 932) 00:15:53.993 3.698 - 3.721: 55.3139% ( 780) 00:15:53.993 3.721 - 3.745: 59.0060% ( 494) 00:15:53.993 3.745 - 3.769: 62.7354% ( 499) 00:15:53.993 3.769 - 3.793: 65.7175% ( 399) 00:15:53.993 3.793 - 3.816: 69.3946% ( 492) 00:15:53.993 3.816 - 3.840: 72.6981% ( 442) 00:15:53.993 3.840 - 3.864: 76.9507% ( 569) 00:15:53.993 3.864 - 3.887: 81.0314% ( 546) 00:15:53.993 3.887 - 3.911: 84.0359% ( 402) 00:15:53.993 3.911 - 3.935: 86.6442% ( 349) 00:15:53.993 3.935 - 3.959: 88.3184% ( 224) 00:15:53.993 3.959 - 3.982: 89.8729% ( 208) 00:15:53.993 3.982 - 4.006: 91.2481% ( 184) 00:15:53.993 4.006 - 4.030: 92.2870% ( 139) 00:15:53.993 4.030 - 4.053: 93.2810% ( 133) 00:15:53.993 4.053 - 4.077: 94.0807% ( 107) 00:15:53.993 4.077 - 4.101: 94.8505% ( 103) 00:15:53.993 4.101 - 4.124: 95.4335% ( 78) 00:15:53.993 4.124 - 4.148: 95.9567% ( 70) 00:15:53.993 4.148 - 4.172: 96.2855% ( 44) 00:15:53.993 4.172 - 4.196: 96.5022% ( 29) 00:15:53.993 4.196 - 4.219: 96.7265% ( 30) 00:15:53.993 4.219 - 4.243: 96.7937% ( 9) 00:15:53.993 4.243 - 4.267: 96.9581% ( 22) 00:15:53.993 4.267 - 4.290: 97.0254% ( 9) 00:15:53.993 4.290 - 4.314: 97.0777% ( 7) 00:15:53.993 4.314 - 4.338: 97.1749% ( 13) 00:15:53.993 4.338 - 4.361: 97.2422% ( 9) 00:15:53.993 4.361 - 4.385: 97.3244% ( 11) 00:15:53.993 4.385 - 4.409: 97.3991% ( 10) 00:15:53.993 4.409 - 4.433: 97.4141% ( 2) 00:15:53.993 4.433 - 4.456: 97.4589% ( 6) 00:15:53.993 4.456 - 4.480: 97.4738% ( 2) 00:15:53.993 4.480 - 4.504: 97.4813% ( 1) 00:15:53.993 4.504 - 4.527: 97.4888% ( 1) 00:15:53.994 4.527 - 4.551: 97.4963% ( 1) 00:15:53.994 4.551 - 4.575: 97.5112% ( 2) 00:15:53.994 4.575 - 4.599: 97.5187% ( 1) 00:15:53.994 4.622 - 4.646: 97.5336% ( 2) 00:15:53.994 4.646 - 4.670: 97.5561% ( 3) 00:15:53.994 4.670 - 4.693: 97.5710% ( 2) 00:15:53.994 4.693 - 4.717: 97.6009% ( 4) 00:15:53.994 4.717 - 4.741: 97.6532% ( 7) 00:15:53.994 4.741 - 4.764: 97.7130% ( 8) 00:15:53.994 4.764 - 4.788: 97.7803% ( 9) 00:15:53.994 4.788 - 4.812: 97.8176% ( 5) 00:15:53.994 4.812 - 4.836: 97.8475% ( 4) 00:15:53.994 4.836 - 4.859: 97.8625% ( 2) 00:15:53.994 4.859 - 4.883: 97.9297% ( 9) 00:15:53.994 4.883 - 4.907: 97.9746% ( 6) 00:15:53.994 4.907 - 4.930: 98.0045% ( 4) 00:15:53.994 4.930 - 4.954: 98.0344% ( 4) 00:15:53.994 4.954 - 4.978: 98.0643% ( 4) 00:15:53.994 4.978 - 5.001: 98.0867% ( 3) 00:15:53.994 5.001 - 5.025: 98.1016% ( 2) 00:15:53.994 5.025 - 5.049: 98.1390% ( 5) 00:15:53.994 5.049 - 5.073: 98.1540% ( 2) 00:15:53.994 5.073 - 5.096: 98.1614% ( 1) 00:15:53.994 5.120 - 5.144: 98.1689% ( 1) 00:15:53.994 5.167 - 5.191: 98.1764% ( 1) 00:15:53.994 5.191 - 5.215: 98.1839% ( 1) 00:15:53.994 5.215 - 5.239: 98.1913% ( 1) 00:15:53.994 5.239 - 5.262: 98.1988% ( 1) 00:15:53.994 5.286 - 5.310: 98.2063% ( 1) 00:15:53.994 5.310 - 5.333: 98.2138% ( 1) 00:15:53.994 5.333 - 5.357: 98.2212% ( 1) 00:15:53.994 5.357 - 5.381: 98.2362% ( 2) 00:15:53.994 5.452 - 5.476: 98.2436% ( 1) 
00:15:53.994 5.523 - 5.547: 98.2511% ( 1) 00:15:53.994 5.641 - 5.665: 98.2586% ( 1) 00:15:53.994 5.713 - 5.736: 98.2661% ( 1) 00:15:53.994 5.879 - 5.902: 98.2810% ( 2) 00:15:53.994 5.950 - 5.973: 98.2885% ( 1) 00:15:53.994 6.210 - 6.258: 98.2960% ( 1) 00:15:53.994 6.400 - 6.447: 98.3109% ( 2) 00:15:53.994 6.495 - 6.542: 98.3259% ( 2) 00:15:53.994 6.684 - 6.732: 98.3333% ( 1) 00:15:53.994 6.827 - 6.874: 98.3408% ( 1) 00:15:53.994 6.921 - 6.969: 98.3483% ( 1) 00:15:53.994 7.064 - 7.111: 98.3782% ( 4) 00:15:53.994 7.111 - 7.159: 98.3857% ( 1) 00:15:53.994 7.206 - 7.253: 98.3931% ( 1) 00:15:53.994 7.253 - 7.301: 98.4006% ( 1) 00:15:53.994 7.301 - 7.348: 98.4081% ( 1) 00:15:53.994 7.348 - 7.396: 98.4230% ( 2) 00:15:53.994 7.443 - 7.490: 98.4454% ( 3) 00:15:53.994 7.490 - 7.538: 98.4529% ( 1) 00:15:53.994 7.538 - 7.585: 98.4679% ( 2) 00:15:53.994 7.727 - 7.775: 98.4753% ( 1) 00:15:53.994 7.775 - 7.822: 98.4828% ( 1) 00:15:53.994 7.822 - 7.870: 98.4903% ( 1) 00:15:53.994 7.870 - 7.917: 98.4978% ( 1) 00:15:53.994 7.917 - 7.964: 98.5052% ( 1) 00:15:53.994 7.964 - 8.012: 98.5202% ( 2) 00:15:53.994 8.059 - 8.107: 98.5277% ( 1) 00:15:53.994 8.107 - 8.154: 98.5351% ( 1) 00:15:53.994 8.249 - 8.296: 98.5426% ( 1) 00:15:53.994 8.391 - 8.439: 98.5501% ( 1) 00:15:53.994 8.486 - 8.533: 98.5650% ( 2) 00:15:53.994 8.533 - 8.581: 98.5725% ( 1) 00:15:53.994 8.628 - 8.676: 98.5800% ( 1) 00:15:53.994 8.676 - 8.723: 98.5874% ( 1) 00:15:53.994 8.818 - 8.865: 98.5949% ( 1) 00:15:53.994 8.865 - 8.913: 98.6099% ( 2) 00:15:53.994 9.102 - 9.150: 98.6173% ( 1) 00:15:53.994 9.292 - 9.339: 98.6248% ( 1) 00:15:53.994 9.339 - 9.387: 98.6323% ( 1) 00:15:53.994 9.529 - 9.576: 98.6472% ( 2) 00:15:53.994 9.766 - 9.813: 98.6547% ( 1) 00:15:53.994 9.908 - 9.956: 98.6697% ( 2) 00:15:53.994 10.003 - 10.050: 98.6846% ( 2) 00:15:53.994 10.145 - 10.193: 98.6921% ( 1) 00:15:53.994 10.430 - 10.477: 98.6996% ( 1) 00:15:53.994 10.524 - 10.572: 98.7070% ( 1) 00:15:53.994 11.093 - 11.141: 98.7145% ( 1) 00:15:53.994 11.141 - 11.188: 98.7220% ( 1) 00:15:53.994 11.283 - 11.330: 98.7294% ( 1) 00:15:53.994 11.473 - 11.520: 98.7369% ( 1) 00:15:53.994 11.899 - 11.947: 98.7519% ( 2) 00:15:53.994 11.994 - 12.041: 98.7593% ( 1) 00:15:53.994 12.326 - 12.421: 98.7668% ( 1) 00:15:53.994 12.421 - 12.516: 98.7818% ( 2) 00:15:53.994 12.516 - 12.610: 98.8042% ( 3) 00:15:53.994 12.895 - 12.990: 98.8266% ( 3) 00:15:53.994 13.084 - 13.179: 98.8341% ( 1) 00:15:53.994 13.274 - 13.369: 98.8416% ( 1) 00:15:53.994 13.369 - 13.464: 98.8490% ( 1) 00:15:53.994 13.748 - 13.843: 98.8565% ( 1) 00:15:53.994 13.938 - 14.033: 98.8640% ( 1) 00:15:53.994 14.412 - 14.507: 98.8714% ( 1) 00:15:53.994 14.507 - 14.601: 98.8864% ( 2) 00:15:53.994 14.791 - 14.886: 98.8939% ( 1) 00:15:53.994 15.170 - 15.265: 98.9013% ( 1) 00:15:53.994 15.360 - 15.455: 98.9088% ( 1) 00:15:53.994 15.739 - 15.834: 98.9163% ( 1) 00:15:53.994 17.161 - 17.256: 98.9238% ( 1) 00:15:53.994 17.351 - 17.446: 98.9611% ( 5) 00:15:53.994 17.446 - 17.541: 98.9836% ( 3) 00:15:53.994 17.541 - 17.636: 99.0433% ( 8) 00:15:53.994 17.636 - 17.730: 99.0658% ( 3) 00:15:53.994 17.730 - 17.825: 99.1031% ( 5) 00:15:53.994 17.825 - 17.920: 99.1405% ( 5) 00:15:53.994 17.920 - 18.015: 99.1854% ( 6) 00:15:53.994 18.015 - 18.110: 99.2227% ( 5) 00:15:53.994 18.110 - 18.204: 99.2975% ( 10) 00:15:53.994 18.204 - 18.299: 99.3647% ( 9) 00:15:53.994 18.299 - 18.394: 99.4245% ( 8) 00:15:53.994 18.394 - 18.489: 99.5067% ( 11) 00:15:53.994 18.489 - 18.584: 99.5815% ( 10) 00:15:53.994 18.584 - 18.679: 99.6487% ( 9) 00:15:53.994 18.679 - 
18.773: 99.6861% ( 5) 00:15:53.994 18.773 - 18.868: 99.7010% ( 2) 00:15:53.994 18.868 - 18.963: 99.7384% ( 5) 00:15:53.994 18.963 - 19.058: 99.7608% ( 3) 00:15:53.994 19.153 - 19.247: 99.7683% ( 1) 00:15:53.994 19.532 - 19.627: 99.7758% ( 1) 00:15:53.994 19.721 - 19.816: 99.7907% ( 2) 00:15:53.994 20.006 - 20.101: 99.7982% ( 1) 00:15:53.994 21.428 - 21.523: 99.8132% ( 2) 00:15:53.994 21.807 - 21.902: 99.8206% ( 1) 00:15:53.994 21.902 - 21.997: 99.8281% ( 1) 00:15:53.994 24.652 - 24.841: 99.8356% ( 1) 00:15:53.994 24.841 - 25.031: 99.8430% ( 1) 00:15:53.994 27.876 - 28.065: 99.8505% ( 1) 00:15:53.994 28.824 - 29.013: 99.8580% ( 1) 00:15:53.994 32.996 - 33.185: 99.8655% ( 1) 00:15:53.994 3009.801 - 3021.938: 99.8729% ( 1) 00:15:53.994 3980.705 - 4004.978: 99.9925% ( 16) 00:15:53.994 4004.978 - 4029.250: 100.0000% ( 1) 00:15:53.994 00:15:53.994 Complete histogram 00:15:53.994 ================== 00:15:53.994 Range in us Cumulative Count 00:15:53.994 2.062 - 2.074: 10.2392% ( 1370) 00:15:53.994 2.074 - 2.086: 36.4649% ( 3509) 00:15:53.994 2.086 - 2.098: 39.3199% ( 382) 00:15:53.994 2.098 - 2.110: 49.6786% ( 1386) 00:15:53.994 2.110 - 2.121: 59.1555% ( 1268) 00:15:53.994 2.121 - 2.133: 60.7250% ( 210) 00:15:53.994 2.133 - 2.145: 69.1330% ( 1125) 00:15:53.994 2.145 - 2.157: 74.3946% ( 704) 00:15:53.994 2.157 - 2.169: 75.3139% ( 123) 00:15:53.994 2.169 - 2.181: 79.3274% ( 537) 00:15:53.994 2.181 - 2.193: 81.5620% ( 299) 00:15:53.994 2.193 - 2.204: 82.1898% ( 84) 00:15:53.994 2.204 - 2.216: 85.0299% ( 380) 00:15:53.994 2.216 - 2.228: 87.8027% ( 371) 00:15:53.994 2.228 - 2.240: 90.0000% ( 294) 00:15:53.994 2.240 - 2.252: 92.0179% ( 270) 00:15:53.994 2.252 - 2.264: 93.3707% ( 181) 00:15:53.994 2.264 - 2.276: 93.6697% ( 40) 00:15:53.994 2.276 - 2.287: 94.0807% ( 55) 00:15:53.994 2.287 - 2.299: 94.5964% ( 69) 00:15:53.994 2.299 - 2.311: 95.2317% ( 85) 00:15:53.994 2.311 - 2.323: 95.6203% ( 52) 00:15:53.994 2.323 - 2.335: 95.7175% ( 13) 00:15:53.994 2.335 - 2.347: 95.7399% ( 3) 00:15:53.994 2.347 - 2.359: 95.8296% ( 12) 00:15:53.994 2.359 - 2.370: 96.0538% ( 30) 00:15:53.994 2.370 - 2.382: 96.3976% ( 46) 00:15:53.994 2.382 - 2.394: 96.8610% ( 62) 00:15:53.994 2.394 - 2.406: 97.1674% ( 41) 00:15:53.994 2.406 - 2.418: 97.3318% ( 22) 00:15:53.994 2.418 - 2.430: 97.4514% ( 16) 00:15:53.994 2.430 - 2.441: 97.5859% ( 18) 00:15:53.994 2.441 - 2.453: 97.7130% ( 17) 00:15:53.994 2.453 - 2.465: 97.8251% ( 15) 00:15:53.994 2.465 - 2.477: 97.9596% ( 18) 00:15:53.994 2.477 - 2.489: 98.0792% ( 16) 00:15:53.994 2.489 - 2.501: 98.1839% ( 14) 00:15:53.994 2.501 - 2.513: 98.2735% ( 12) 00:15:53.994 2.513 - 2.524: 98.3109% ( 5) 00:15:53.994 2.524 - 2.536: 98.3558% ( 6) 00:15:53.994 2.536 - 2.548: 98.4155% ( 8) 00:15:53.994 2.548 - 2.560: 98.4380% ( 3) 00:15:53.994 2.560 - 2.572: 98.4679% ( 4) 00:15:53.994 2.572 - 2.584: 98.4828% ( 2) 00:15:53.994 2.584 - 2.596: 98.4903% ( 1) 00:15:53.994 2.596 - 2.607: 98.4978% ( 1) 00:15:53.994 2.607 - 2.619: 98.5127% ( 2) 00:15:53.994 2.619 - 2.631: 98.5277% ( 2) 00:15:53.994 2.631 - 2.643: 98.5351% ( 1) 00:15:53.994 2.667 - 2.679: 98.5426% ( 1) 00:15:53.994 2.690 - 2.702: 98.5501% ( 1) 00:15:53.994 2.702 - 2.714: 98.5575% ( 1) 00:15:53.994 2.726 - 2.738: 98.5650% ( 1) 00:15:53.994 3.390 - 3.413: 98.5800% ( 2) 00:15:53.994 3.437 - 3.461: 98.5949% ( 2) 00:15:53.994 3.461 - 3.484: 98.6173% ( 3) 00:15:53.994 3.484 - 3.508: 98.6248% ( 1) 00:15:53.994 3.508 - 3.532: 98.6323% ( 1) 00:15:53.994 3.556 - 3.579: 98.6472% ( 2) 00:15:53.994 3.579 - 3.603: 98.6547% ( 1) 00:15:53.994 3.603 - 3.627: 
98.6771% ( 3) 00:15:53.994 3.627 - 3.650: 98.7145% ( 5) 00:15:53.994 3.650 - 3.674: 98.7220% ( 1) 00:15:53.995 3.698 - 3.721: 98.7369% ( 2) 00:15:53.995 3.769 - 3.793: 98.7519% ( 2) 00:15:53.995 3.793 - 3.816: 98.7593% ( 1) 00:15:53.995 3.840 - 3.864: 98.7743% ( 2) 00:15:53.995 3.864 - 3.887: 98.7818% ( 1) 00:15:53.995 3.911 - 3.935: 98.7892% ( 1) 00:15:53.995 3.935 - 3.959: 98.8042% ( 2) 00:15:53.995 3.959 - 3.982: 98.8117% ( 1) 00:15:53.995 4.124 - 4.148: 98.8191% ( 1) 00:15:53.995 5.096 - 5.120: 98.8266% ( 1) 00:15:53.995 5.167 - 5.191: 9[2024-07-15 06:44:41.539485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:53.995 8.8341% ( 1) 00:15:53.995 5.333 - 5.357: 98.8416% ( 1) 00:15:53.995 5.499 - 5.523: 98.8490% ( 1) 00:15:53.995 5.618 - 5.641: 98.8565% ( 1) 00:15:53.995 5.641 - 5.665: 98.8789% ( 3) 00:15:53.995 5.997 - 6.021: 98.8864% ( 1) 00:15:53.995 6.068 - 6.116: 98.8939% ( 1) 00:15:53.995 6.163 - 6.210: 98.9088% ( 2) 00:15:53.995 6.210 - 6.258: 98.9163% ( 1) 00:15:53.995 6.353 - 6.400: 98.9238% ( 1) 00:15:53.995 6.400 - 6.447: 98.9312% ( 1) 00:15:53.995 6.447 - 6.495: 98.9387% ( 1) 00:15:53.995 6.495 - 6.542: 98.9462% ( 1) 00:15:53.995 6.732 - 6.779: 98.9537% ( 1) 00:15:53.995 6.969 - 7.016: 98.9611% ( 1) 00:15:53.995 7.206 - 7.253: 98.9686% ( 1) 00:15:53.995 7.490 - 7.538: 98.9761% ( 1) 00:15:53.995 9.339 - 9.387: 98.9836% ( 1) 00:15:53.995 9.813 - 9.861: 98.9910% ( 1) 00:15:53.995 15.360 - 15.455: 98.9985% ( 1) 00:15:53.995 15.550 - 15.644: 99.0060% ( 1) 00:15:53.995 15.644 - 15.739: 99.0359% ( 4) 00:15:53.995 15.739 - 15.834: 99.0433% ( 1) 00:15:53.995 15.834 - 15.929: 99.0508% ( 1) 00:15:53.995 15.929 - 16.024: 99.0807% ( 4) 00:15:53.995 16.024 - 16.119: 99.0957% ( 2) 00:15:53.995 16.119 - 16.213: 99.1405% ( 6) 00:15:53.995 16.213 - 16.308: 99.1480% ( 1) 00:15:53.995 16.308 - 16.403: 99.2003% ( 7) 00:15:53.995 16.403 - 16.498: 99.2601% ( 8) 00:15:53.995 16.498 - 16.593: 99.2825% ( 3) 00:15:53.995 16.593 - 16.687: 99.3199% ( 5) 00:15:53.995 16.687 - 16.782: 99.3423% ( 3) 00:15:53.995 16.782 - 16.877: 99.3647% ( 3) 00:15:53.995 16.877 - 16.972: 99.3797% ( 2) 00:15:53.995 16.972 - 17.067: 99.3946% ( 2) 00:15:53.995 17.067 - 17.161: 99.4021% ( 1) 00:15:53.995 17.161 - 17.256: 99.4245% ( 3) 00:15:53.995 17.256 - 17.351: 99.4395% ( 2) 00:15:53.995 17.351 - 17.446: 99.4469% ( 1) 00:15:53.995 17.446 - 17.541: 99.4544% ( 1) 00:15:53.995 17.825 - 17.920: 99.4619% ( 1) 00:15:53.995 17.920 - 18.015: 99.4694% ( 1) 00:15:53.995 18.204 - 18.299: 99.4768% ( 1) 00:15:53.995 18.394 - 18.489: 99.4843% ( 1) 00:15:53.995 21.428 - 21.523: 99.4918% ( 1) 00:15:53.995 3252.527 - 3276.800: 99.4993% ( 1) 00:15:53.995 3980.705 - 4004.978: 99.9327% ( 58) 00:15:53.995 4004.978 - 4029.250: 100.0000% ( 9) 00:15:53.995 00:15:53.995 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:53.995 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:53.995 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:53.995 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:53.995 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:54.259 [ 00:15:54.259 { 00:15:54.259 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:54.259 "subtype": "Discovery", 00:15:54.259 "listen_addresses": [], 00:15:54.259 "allow_any_host": true, 00:15:54.259 "hosts": [] 00:15:54.259 }, 00:15:54.259 { 00:15:54.259 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:54.259 "subtype": "NVMe", 00:15:54.259 "listen_addresses": [ 00:15:54.259 { 00:15:54.259 "trtype": "VFIOUSER", 00:15:54.259 "adrfam": "IPv4", 00:15:54.259 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:54.259 "trsvcid": "0" 00:15:54.259 } 00:15:54.259 ], 00:15:54.259 "allow_any_host": true, 00:15:54.259 "hosts": [], 00:15:54.259 "serial_number": "SPDK1", 00:15:54.259 "model_number": "SPDK bdev Controller", 00:15:54.259 "max_namespaces": 32, 00:15:54.259 "min_cntlid": 1, 00:15:54.259 "max_cntlid": 65519, 00:15:54.259 "namespaces": [ 00:15:54.259 { 00:15:54.259 "nsid": 1, 00:15:54.259 "bdev_name": "Malloc1", 00:15:54.259 "name": "Malloc1", 00:15:54.259 "nguid": "A8294F0E97074DD8A9D7462F8F544FF1", 00:15:54.259 "uuid": "a8294f0e-9707-4dd8-a9d7-462f8f544ff1" 00:15:54.259 } 00:15:54.259 ] 00:15:54.259 }, 00:15:54.259 { 00:15:54.259 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:54.259 "subtype": "NVMe", 00:15:54.259 "listen_addresses": [ 00:15:54.259 { 00:15:54.259 "trtype": "VFIOUSER", 00:15:54.259 "adrfam": "IPv4", 00:15:54.259 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:54.259 "trsvcid": "0" 00:15:54.259 } 00:15:54.259 ], 00:15:54.259 "allow_any_host": true, 00:15:54.259 "hosts": [], 00:15:54.259 "serial_number": "SPDK2", 00:15:54.259 "model_number": "SPDK bdev Controller", 00:15:54.259 "max_namespaces": 32, 00:15:54.259 "min_cntlid": 1, 00:15:54.259 "max_cntlid": 65519, 00:15:54.259 "namespaces": [ 00:15:54.259 { 00:15:54.259 "nsid": 1, 00:15:54.259 "bdev_name": "Malloc2", 00:15:54.259 "name": "Malloc2", 00:15:54.259 "nguid": "014BD889B96A47CB9B9D7B78600D0547", 00:15:54.259 "uuid": "014bd889-b96a-47cb-9b9d-7b78600d0547" 00:15:54.259 } 00:15:54.259 ] 00:15:54.259 } 00:15:54.259 ] 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=605898 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:54.518 06:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:54.518 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.518 [2024-07-15 06:44:42.041370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.776 Malloc3 00:15:54.776 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:55.034 [2024-07-15 06:44:42.394980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.034 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.034 Asynchronous Event Request test 00:15:55.034 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.034 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.034 Registering asynchronous event callbacks... 00:15:55.034 Starting namespace attribute notice tests for all controllers... 00:15:55.034 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:55.034 aer_cb - Changed Namespace 00:15:55.034 Cleaning up... 00:15:55.034 [ 00:15:55.034 { 00:15:55.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.034 "subtype": "Discovery", 00:15:55.034 "listen_addresses": [], 00:15:55.034 "allow_any_host": true, 00:15:55.034 "hosts": [] 00:15:55.034 }, 00:15:55.034 { 00:15:55.034 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:55.034 "subtype": "NVMe", 00:15:55.034 "listen_addresses": [ 00:15:55.034 { 00:15:55.034 "trtype": "VFIOUSER", 00:15:55.034 "adrfam": "IPv4", 00:15:55.034 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:55.034 "trsvcid": "0" 00:15:55.034 } 00:15:55.034 ], 00:15:55.034 "allow_any_host": true, 00:15:55.034 "hosts": [], 00:15:55.034 "serial_number": "SPDK1", 00:15:55.034 "model_number": "SPDK bdev Controller", 00:15:55.034 "max_namespaces": 32, 00:15:55.034 "min_cntlid": 1, 00:15:55.034 "max_cntlid": 65519, 00:15:55.034 "namespaces": [ 00:15:55.034 { 00:15:55.034 "nsid": 1, 00:15:55.034 "bdev_name": "Malloc1", 00:15:55.034 "name": "Malloc1", 00:15:55.034 "nguid": "A8294F0E97074DD8A9D7462F8F544FF1", 00:15:55.034 "uuid": "a8294f0e-9707-4dd8-a9d7-462f8f544ff1" 00:15:55.034 }, 00:15:55.034 { 00:15:55.034 "nsid": 2, 00:15:55.034 "bdev_name": "Malloc3", 00:15:55.034 "name": "Malloc3", 00:15:55.034 "nguid": "988821E8FB6A4CB39D2B03F3CC7458FA", 00:15:55.034 "uuid": "988821e8-fb6a-4cb3-9d2b-03f3cc7458fa" 00:15:55.034 } 00:15:55.034 ] 00:15:55.034 }, 00:15:55.034 { 00:15:55.034 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:55.034 "subtype": "NVMe", 00:15:55.034 "listen_addresses": [ 00:15:55.034 { 00:15:55.034 "trtype": "VFIOUSER", 00:15:55.034 "adrfam": "IPv4", 00:15:55.034 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:55.034 "trsvcid": "0" 00:15:55.034 } 00:15:55.034 ], 00:15:55.034 "allow_any_host": true, 00:15:55.034 "hosts": [], 00:15:55.034 "serial_number": "SPDK2", 00:15:55.034 "model_number": "SPDK bdev Controller", 00:15:55.034 
"max_namespaces": 32, 00:15:55.034 "min_cntlid": 1, 00:15:55.034 "max_cntlid": 65519, 00:15:55.034 "namespaces": [ 00:15:55.034 { 00:15:55.034 "nsid": 1, 00:15:55.034 "bdev_name": "Malloc2", 00:15:55.034 "name": "Malloc2", 00:15:55.034 "nguid": "014BD889B96A47CB9B9D7B78600D0547", 00:15:55.034 "uuid": "014bd889-b96a-47cb-9b9d-7b78600d0547" 00:15:55.034 } 00:15:55.034 ] 00:15:55.034 } 00:15:55.034 ] 00:15:55.293 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 605898 00:15:55.293 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:55.293 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:55.293 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:55.293 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:55.293 [2024-07-15 06:44:42.667077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:55.293 [2024-07-15 06:44:42.667115] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605919 ] 00:15:55.293 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.293 [2024-07-15 06:44:42.701055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:55.293 [2024-07-15 06:44:42.710228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:55.293 [2024-07-15 06:44:42.710258] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdbb2924000 00:15:55.293 [2024-07-15 06:44:42.711229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.712218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.713223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.714226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.715236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.716247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.717254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.718260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:55.293 [2024-07-15 06:44:42.719266] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:55.293 [2024-07-15 06:44:42.719288] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdbb16d6000 00:15:55.293 [2024-07-15 06:44:42.720398] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:55.293 [2024-07-15 06:44:42.739155] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:55.293 [2024-07-15 06:44:42.739203] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:55.293 [2024-07-15 06:44:42.741291] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:55.293 [2024-07-15 06:44:42.741345] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:55.293 [2024-07-15 06:44:42.741427] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:55.293 [2024-07-15 06:44:42.741450] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:55.293 [2024-07-15 06:44:42.741460] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:55.293 [2024-07-15 06:44:42.742297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:55.293 [2024-07-15 06:44:42.742321] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:55.293 [2024-07-15 06:44:42.742335] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:55.293 [2024-07-15 06:44:42.743298] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:55.293 [2024-07-15 06:44:42.743318] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:55.293 [2024-07-15 06:44:42.743336] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:55.293 [2024-07-15 06:44:42.744301] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:55.293 [2024-07-15 06:44:42.744321] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:55.293 [2024-07-15 06:44:42.745309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:55.293 [2024-07-15 06:44:42.745328] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:55.294 [2024-07-15 06:44:42.745337] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:55.294 [2024-07-15 06:44:42.745348] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:55.294 [2024-07-15 06:44:42.745457] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:55.294 [2024-07-15 06:44:42.745465] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:55.294 [2024-07-15 06:44:42.745473] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:55.294 [2024-07-15 06:44:42.746322] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:55.294 [2024-07-15 06:44:42.747324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:55.294 [2024-07-15 06:44:42.748331] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:55.294 [2024-07-15 06:44:42.749326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.294 [2024-07-15 06:44:42.749406] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:55.294 [2024-07-15 06:44:42.750341] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:55.294 [2024-07-15 06:44:42.750360] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:55.294 [2024-07-15 06:44:42.750369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.750392] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:55.294 [2024-07-15 06:44:42.750407] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.750428] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:55.294 [2024-07-15 06:44:42.750437] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:55.294 [2024-07-15 06:44:42.750454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.756901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.756928] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:55.294 [2024-07-15 06:44:42.756941] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:55.294 [2024-07-15 06:44:42.756949] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:55.294 [2024-07-15 06:44:42.756966] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:55.294 [2024-07-15 06:44:42.756974] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:55.294 [2024-07-15 06:44:42.756981] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:55.294 [2024-07-15 06:44:42.756989] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.757002] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.757019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.764902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.764936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.294 [2024-07-15 06:44:42.764950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.294 [2024-07-15 06:44:42.764961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.294 [2024-07-15 06:44:42.764973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.294 [2024-07-15 06:44:42.764981] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.764997] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.765011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.772904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.772929] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:55.294 [2024-07-15 06:44:42.772938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.772950] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.772964] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.772979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.780984] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.781000] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.781018] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:55.294 [2024-07-15 06:44:42.781027] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:55.294 [2024-07-15 06:44:42.781037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.788909] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:55.294 [2024-07-15 06:44:42.788927] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.788942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.788954] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:55.294 [2024-07-15 06:44:42.788962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:55.294 [2024-07-15 06:44:42.788972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.796903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.796940] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.796956] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.796969] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:55.294 [2024-07-15 06:44:42.796977] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:55.294 [2024-07-15 06:44:42.796986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.804889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.804910] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804927] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804940] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804951] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804959] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804967] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:55.294 [2024-07-15 06:44:42.804974] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:55.294 [2024-07-15 06:44:42.804982] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:55.294 [2024-07-15 06:44:42.805016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.812885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.812913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.820891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.820915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.828889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.828913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:55.294 [2024-07-15 06:44:42.836885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:55.294 [2024-07-15 06:44:42.836912] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:55.294 [2024-07-15 06:44:42.836922] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:55.294 [2024-07-15 06:44:42.836938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:55.294 [2024-07-15 06:44:42.836944] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:55.294 [2024-07-15 06:44:42.836953] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:55.294 [2024-07-15 06:44:42.836965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:55.294 [2024-07-15 06:44:42.836972] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:55.295 [2024-07-15 06:44:42.836981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:55.295 [2024-07-15 06:44:42.836992] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:55.295 [2024-07-15 06:44:42.836999] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:55.295 [2024-07-15 06:44:42.837008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:55.295 [2024-07-15 06:44:42.837020] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:55.295 [2024-07-15 06:44:42.837027] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:55.295 [2024-07-15 06:44:42.837036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:55.295 [2024-07-15 06:44:42.844888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:55.295 [2024-07-15 06:44:42.844915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:55.295 [2024-07-15 06:44:42.844931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:55.295 [2024-07-15 06:44:42.844946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:55.295 ===================================================== 00:15:55.295 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.295 ===================================================== 00:15:55.295 Controller Capabilities/Features 00:15:55.295 ================================ 00:15:55.295 Vendor ID: 4e58 00:15:55.295 Subsystem Vendor ID: 4e58 00:15:55.295 Serial Number: SPDK2 00:15:55.295 Model Number: SPDK bdev Controller 00:15:55.295 Firmware Version: 24.05.1 00:15:55.295 Recommended Arb Burst: 6 00:15:55.295 IEEE OUI Identifier: 8d 6b 50 00:15:55.295 Multi-path I/O 00:15:55.295 May have multiple subsystem ports: Yes 00:15:55.295 May have multiple controllers: Yes 00:15:55.295 Associated with SR-IOV VF: No 00:15:55.295 Max Data Transfer Size: 131072 00:15:55.295 Max Number of Namespaces: 32 00:15:55.295 Max Number of I/O Queues: 127 00:15:55.295 NVMe Specification Version (VS): 1.3 00:15:55.295 NVMe Specification Version (Identify): 1.3 00:15:55.295 Maximum Queue Entries: 256 00:15:55.295 Contiguous Queues Required: Yes 00:15:55.295 Arbitration Mechanisms Supported 00:15:55.295 Weighted Round Robin: Not Supported 00:15:55.295 Vendor Specific: Not Supported 00:15:55.295 Reset Timeout: 15000 ms 00:15:55.295 Doorbell Stride: 4 bytes 
00:15:55.295 NVM Subsystem Reset: Not Supported 00:15:55.295 Command Sets Supported 00:15:55.295 NVM Command Set: Supported 00:15:55.295 Boot Partition: Not Supported 00:15:55.295 Memory Page Size Minimum: 4096 bytes 00:15:55.295 Memory Page Size Maximum: 4096 bytes 00:15:55.295 Persistent Memory Region: Not Supported 00:15:55.295 Optional Asynchronous Events Supported 00:15:55.295 Namespace Attribute Notices: Supported 00:15:55.295 Firmware Activation Notices: Not Supported 00:15:55.295 ANA Change Notices: Not Supported 00:15:55.295 PLE Aggregate Log Change Notices: Not Supported 00:15:55.295 LBA Status Info Alert Notices: Not Supported 00:15:55.295 EGE Aggregate Log Change Notices: Not Supported 00:15:55.295 Normal NVM Subsystem Shutdown event: Not Supported 00:15:55.295 Zone Descriptor Change Notices: Not Supported 00:15:55.295 Discovery Log Change Notices: Not Supported 00:15:55.295 Controller Attributes 00:15:55.295 128-bit Host Identifier: Supported 00:15:55.295 Non-Operational Permissive Mode: Not Supported 00:15:55.295 NVM Sets: Not Supported 00:15:55.295 Read Recovery Levels: Not Supported 00:15:55.295 Endurance Groups: Not Supported 00:15:55.295 Predictable Latency Mode: Not Supported 00:15:55.295 Traffic Based Keep Alive: Not Supported 00:15:55.295 Namespace Granularity: Not Supported 00:15:55.295 SQ Associations: Not Supported 00:15:55.295 UUID List: Not Supported 00:15:55.295 Multi-Domain Subsystem: Not Supported 00:15:55.295 Fixed Capacity Management: Not Supported 00:15:55.295 Variable Capacity Management: Not Supported 00:15:55.295 Delete Endurance Group: Not Supported 00:15:55.295 Delete NVM Set: Not Supported 00:15:55.295 Extended LBA Formats Supported: Not Supported 00:15:55.295 Flexible Data Placement Supported: Not Supported 00:15:55.295 00:15:55.295 Controller Memory Buffer Support 00:15:55.295 ================================ 00:15:55.295 Supported: No 00:15:55.295 00:15:55.295 Persistent Memory Region Support 00:15:55.295 ================================ 00:15:55.295 Supported: No 00:15:55.295 00:15:55.295 Admin Command Set Attributes 00:15:55.295 ============================ 00:15:55.295 Security Send/Receive: Not Supported 00:15:55.295 Format NVM: Not Supported 00:15:55.295 Firmware Activate/Download: Not Supported 00:15:55.295 Namespace Management: Not Supported 00:15:55.295 Device Self-Test: Not Supported 00:15:55.295 Directives: Not Supported 00:15:55.295 NVMe-MI: Not Supported 00:15:55.295 Virtualization Management: Not Supported 00:15:55.295 Doorbell Buffer Config: Not Supported 00:15:55.295 Get LBA Status Capability: Not Supported 00:15:55.295 Command & Feature Lockdown Capability: Not Supported 00:15:55.295 Abort Command Limit: 4 00:15:55.295 Async Event Request Limit: 4 00:15:55.295 Number of Firmware Slots: N/A 00:15:55.295 Firmware Slot 1 Read-Only: N/A 00:15:55.295 Firmware Activation Without Reset: N/A 00:15:55.295 Multiple Update Detection Support: N/A 00:15:55.295 Firmware Update Granularity: No Information Provided 00:15:55.295 Per-Namespace SMART Log: No 00:15:55.295 Asymmetric Namespace Access Log Page: Not Supported 00:15:55.295 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:55.295 Command Effects Log Page: Supported 00:15:55.295 Get Log Page Extended Data: Supported 00:15:55.295 Telemetry Log Pages: Not Supported 00:15:55.295 Persistent Event Log Pages: Not Supported 00:15:55.295 Supported Log Pages Log Page: May Support 00:15:55.295 Commands Supported & Effects Log Page: Not Supported 00:15:55.295 Feature Identifiers & Effects Log Page: May
Support 00:15:55.295 NVMe-MI Commands & Effects Log Page: May Support 00:15:55.295 Data Area 4 for Telemetry Log: Not Supported 00:15:55.295 Error Log Page Entries Supported: 128 00:15:55.295 Keep Alive: Supported 00:15:55.295 Keep Alive Granularity: 10000 ms 00:15:55.295 00:15:55.295 NVM Command Set Attributes 00:15:55.295 ========================== 00:15:55.295 Submission Queue Entry Size 00:15:55.295 Max: 64 00:15:55.295 Min: 64 00:15:55.295 Completion Queue Entry Size 00:15:55.295 Max: 16 00:15:55.295 Min: 16 00:15:55.295 Number of Namespaces: 32 00:15:55.295 Compare Command: Supported 00:15:55.295 Write Uncorrectable Command: Not Supported 00:15:55.295 Dataset Management Command: Supported 00:15:55.295 Write Zeroes Command: Supported 00:15:55.295 Set Features Save Field: Not Supported 00:15:55.295 Reservations: Not Supported 00:15:55.295 Timestamp: Not Supported 00:15:55.295 Copy: Supported 00:15:55.295 Volatile Write Cache: Present 00:15:55.295 Atomic Write Unit (Normal): 1 00:15:55.295 Atomic Write Unit (PFail): 1 00:15:55.295 Atomic Compare & Write Unit: 1 00:15:55.295 Fused Compare & Write: Supported 00:15:55.295 Scatter-Gather List 00:15:55.295 SGL Command Set: Supported (Dword aligned) 00:15:55.295 SGL Keyed: Not Supported 00:15:55.295 SGL Bit Bucket Descriptor: Not Supported 00:15:55.295 SGL Metadata Pointer: Not Supported 00:15:55.295 Oversized SGL: Not Supported 00:15:55.295 SGL Metadata Address: Not Supported 00:15:55.295 SGL Offset: Not Supported 00:15:55.295 Transport SGL Data Block: Not Supported 00:15:55.295 Replay Protected Memory Block: Not Supported 00:15:55.295 00:15:55.295 Firmware Slot Information 00:15:55.295 ========================= 00:15:55.295 Active slot: 1 00:15:55.295 Slot 1 Firmware Revision: 24.05.1 00:15:55.295 00:15:55.295 00:15:55.295 Commands Supported and Effects 00:15:55.295 ============================== 00:15:55.295 Admin Commands 00:15:55.295 -------------- 00:15:55.295 Get Log Page (02h): Supported 00:15:55.295 Identify (06h): Supported 00:15:55.295 Abort (08h): Supported 00:15:55.295 Set Features (09h): Supported 00:15:55.295 Get Features (0Ah): Supported 00:15:55.295 Asynchronous Event Request (0Ch): Supported 00:15:55.295 Keep Alive (18h): Supported 00:15:55.295 I/O Commands 00:15:55.295 ------------ 00:15:55.295 Flush (00h): Supported LBA-Change 00:15:55.295 Write (01h): Supported LBA-Change 00:15:55.295 Read (02h): Supported 00:15:55.295 Compare (05h): Supported 00:15:55.295 Write Zeroes (08h): Supported LBA-Change 00:15:55.295 Dataset Management (09h): Supported LBA-Change 00:15:55.295 Copy (19h): Supported LBA-Change 00:15:55.295 Unknown (79h): Supported LBA-Change 00:15:55.295 Unknown (7Ah): Supported 00:15:55.295 00:15:55.295 Error Log 00:15:55.295 ========= 00:15:55.295 00:15:55.295 Arbitration 00:15:55.295 =========== 00:15:55.295 Arbitration Burst: 1 00:15:55.295 00:15:55.295 Power Management 00:15:55.295 ================ 00:15:55.295 Number of Power States: 1 00:15:55.295 Current Power State: Power State #0 00:15:55.295 Power State #0: 00:15:55.295 Max Power: 0.00 W 00:15:55.295 Non-Operational State: Operational 00:15:55.295 Entry Latency: Not Reported 00:15:55.296 Exit Latency: Not Reported 00:15:55.296 Relative Read Throughput: 0 00:15:55.296 Relative Read Latency: 0 00:15:55.296 Relative Write Throughput: 0 00:15:55.296 Relative Write Latency: 0 00:15:55.296 Idle Power: Not Reported 00:15:55.296 Active Power: Not Reported 00:15:55.296 Non-Operational Permissive Mode: Not Supported 00:15:55.296 00:15:55.296 Health Information 
00:15:55.296 ================== 00:15:55.296 Critical Warnings: 00:15:55.296 Available Spare Space: OK 00:15:55.296 Temperature: OK 00:15:55.296 Device Reliability: OK 00:15:55.296 Read Only: No 00:15:55.296 Volatile Memory Backup: OK 00:15:55.296 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:55.296 [2024-07-15 06:44:42.845070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:55.296 [2024-07-15 06:44:42.852888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:55.296 [2024-07-15 06:44:42.852939] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:55.296 [2024-07-15 06:44:42.852958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.296 [2024-07-15 06:44:42.852968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.296 [2024-07-15 06:44:42.852978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.296 [2024-07-15 06:44:42.852987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.296 [2024-07-15 06:44:42.853078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:55.296 [2024-07-15 06:44:42.853099] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:55.296 [2024-07-15 06:44:42.854077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.296 [2024-07-15 06:44:42.854162] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:55.296 [2024-07-15 06:44:42.854193] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:55.296 [2024-07-15 06:44:42.855086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:55.296 [2024-07-15 06:44:42.855111] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:55.296 [2024-07-15 06:44:42.855178] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:55.296 [2024-07-15 06:44:42.856376] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:55.296 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:55.296 Available Spare: 0% 00:15:55.296 Available Spare Threshold: 0% 00:15:55.296 Life Percentage Used: 0% 00:15:55.296 Data Units Read: 0 00:15:55.296 Data Units Written: 0 00:15:55.296 Host Read Commands: 0 00:15:55.296 Host Write Commands: 0 00:15:55.296 Controller Busy Time: 0 minutes 00:15:55.296 Power Cycles: 0 00:15:55.296 Power On Hours: 0 hours 00:15:55.296 Unsafe Shutdowns: 0 00:15:55.296 Unrecoverable Media Errors: 0 00:15:55.296 Lifetime Error Log Entries: 0 00:15:55.296 Warning Temperature Time: 0
minutes 00:15:55.296 Critical Temperature Time: 0 minutes 00:15:55.296 00:15:55.296 Number of Queues 00:15:55.296 ================ 00:15:55.296 Number of I/O Submission Queues: 127 00:15:55.296 Number of I/O Completion Queues: 127 00:15:55.296 00:15:55.296 Active Namespaces 00:15:55.296 ================= 00:15:55.296 Namespace ID:1 00:15:55.296 Error Recovery Timeout: Unlimited 00:15:55.296 Command Set Identifier: NVM (00h) 00:15:55.296 Deallocate: Supported 00:15:55.296 Deallocated/Unwritten Error: Not Supported 00:15:55.296 Deallocated Read Value: Unknown 00:15:55.296 Deallocate in Write Zeroes: Not Supported 00:15:55.296 Deallocated Guard Field: 0xFFFF 00:15:55.296 Flush: Supported 00:15:55.296 Reservation: Supported 00:15:55.296 Namespace Sharing Capabilities: Multiple Controllers 00:15:55.296 Size (in LBAs): 131072 (0GiB) 00:15:55.296 Capacity (in LBAs): 131072 (0GiB) 00:15:55.296 Utilization (in LBAs): 131072 (0GiB) 00:15:55.296 NGUID: 014BD889B96A47CB9B9D7B78600D0547 00:15:55.296 UUID: 014bd889-b96a-47cb-9b9d-7b78600d0547 00:15:55.296 Thin Provisioning: Not Supported 00:15:55.296 Per-NS Atomic Units: Yes 00:15:55.296 Atomic Boundary Size (Normal): 0 00:15:55.296 Atomic Boundary Size (PFail): 0 00:15:55.296 Atomic Boundary Offset: 0 00:15:55.296 Maximum Single Source Range Length: 65535 00:15:55.296 Maximum Copy Length: 65535 00:15:55.296 Maximum Source Range Count: 1 00:15:55.296 NGUID/EUI64 Never Reused: No 00:15:55.296 Namespace Write Protected: No 00:15:55.296 Number of LBA Formats: 1 00:15:55.296 Current LBA Format: LBA Format #00 00:15:55.296 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:55.296 00:15:55.296 06:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:55.556 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.556 [2024-07-15 06:44:43.086710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.845 Initializing NVMe Controllers 00:16:00.845 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.845 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:00.845 Initialization complete. Launching workers. 
00:16:00.845 ======================================================== 00:16:00.845 Latency(us) 00:16:00.845 Device Information : IOPS MiB/s Average min max 00:16:00.845 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35805.42 139.86 3574.35 1155.98 8029.30 00:16:00.845 ======================================================== 00:16:00.845 Total : 35805.42 139.86 3574.35 1155.98 8029.30 00:16:00.845 00:16:00.845 [2024-07-15 06:44:48.189217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.845 06:44:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:00.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.845 [2024-07-15 06:44:48.423896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.145 Initializing NVMe Controllers 00:16:06.145 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:06.145 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:06.145 Initialization complete. Launching workers. 00:16:06.145 ======================================================== 00:16:06.145 Latency(us) 00:16:06.145 Device Information : IOPS MiB/s Average min max 00:16:06.145 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31958.48 124.84 4004.53 1218.10 8282.31 00:16:06.145 ======================================================== 00:16:06.145 Total : 31958.48 124.84 4004.53 1218.10 8282.31 00:16:06.145 00:16:06.145 [2024-07-15 06:44:53.446127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.145 06:44:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:06.145 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.145 [2024-07-15 06:44:53.657653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:11.423 [2024-07-15 06:44:58.805017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:11.423 Initializing NVMe Controllers 00:16:11.423 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:11.423 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:11.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:11.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:11.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:11.423 Initialization complete. Launching workers. 
00:16:11.423 Starting thread on core 2 00:16:11.423 Starting thread on core 3 00:16:11.423 Starting thread on core 1 00:16:11.423 06:44:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:11.423 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.681 [2024-07-15 06:44:59.099365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.965 [2024-07-15 06:45:02.170764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.965 Initializing NVMe Controllers 00:16:14.965 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.965 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:14.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:14.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:14.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:14.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:14.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:14.965 Initialization complete. Launching workers. 00:16:14.965 Starting thread on core 1 with urgent priority queue 00:16:14.965 Starting thread on core 2 with urgent priority queue 00:16:14.965 Starting thread on core 3 with urgent priority queue 00:16:14.965 Starting thread on core 0 with urgent priority queue 00:16:14.965 SPDK bdev Controller (SPDK2 ) core 0: 5591.67 IO/s 17.88 secs/100000 ios 00:16:14.965 SPDK bdev Controller (SPDK2 ) core 1: 5593.67 IO/s 17.88 secs/100000 ios 00:16:14.965 SPDK bdev Controller (SPDK2 ) core 2: 5663.67 IO/s 17.66 secs/100000 ios 00:16:14.965 SPDK bdev Controller (SPDK2 ) core 3: 5608.67 IO/s 17.83 secs/100000 ios 00:16:14.965 ======================================================== 00:16:14.965 00:16:14.965 06:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:14.965 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.965 [2024-07-15 06:45:02.467991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.965 Initializing NVMe Controllers 00:16:14.965 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.965 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.965 Namespace ID: 1 size: 0GB 00:16:14.965 Initialization complete. 00:16:14.965 INFO: using host memory buffer for IO 00:16:14.965 Hello world! 
00:16:14.965 [2024-07-15 06:45:02.480074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.965 06:45:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:15.224 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.224 [2024-07-15 06:45:02.780322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.615 Initializing NVMe Controllers 00:16:16.615 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.615 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.615 Initialization complete. Launching workers. 00:16:16.615 submit (in ns) avg, min, max = 6261.1, 3484.4, 4016872.2 00:16:16.615 complete (in ns) avg, min, max = 26009.7, 2036.7, 5995858.9 00:16:16.615 00:16:16.615 Submit histogram 00:16:16.615 ================ 00:16:16.615 Range in us Cumulative Count 00:16:16.615 3.484 - 3.508: 0.2611% ( 35) 00:16:16.615 3.508 - 3.532: 1.2609% ( 134) 00:16:16.615 3.532 - 3.556: 4.2826% ( 405) 00:16:16.615 3.556 - 3.579: 8.6921% ( 591) 00:16:16.615 3.579 - 3.603: 18.8913% ( 1367) 00:16:16.615 3.603 - 3.627: 29.2323% ( 1386) 00:16:16.615 3.627 - 3.650: 39.5583% ( 1384) 00:16:16.615 3.650 - 3.674: 45.3406% ( 775) 00:16:16.615 3.674 - 3.698: 51.9884% ( 891) 00:16:16.615 3.698 - 3.721: 57.3827% ( 723) 00:16:16.615 3.721 - 3.745: 61.5907% ( 564) 00:16:16.615 3.745 - 3.769: 64.9481% ( 450) 00:16:16.615 3.769 - 3.793: 67.7460% ( 375) 00:16:16.615 3.793 - 3.816: 71.1483% ( 456) 00:16:16.615 3.816 - 3.840: 74.5579% ( 457) 00:16:16.615 3.840 - 3.864: 78.9376% ( 587) 00:16:16.615 3.864 - 3.887: 82.6009% ( 491) 00:16:16.615 3.887 - 3.911: 85.2869% ( 360) 00:16:16.615 3.911 - 3.935: 87.2864% ( 268) 00:16:16.615 3.935 - 3.959: 89.2785% ( 267) 00:16:16.615 3.959 - 3.982: 90.8603% ( 212) 00:16:16.615 3.982 - 4.006: 92.0615% ( 161) 00:16:16.615 4.006 - 4.030: 93.0836% ( 137) 00:16:16.615 4.030 - 4.053: 94.0013% ( 123) 00:16:16.615 4.053 - 4.077: 94.8594% ( 115) 00:16:16.615 4.077 - 4.101: 95.5980% ( 99) 00:16:16.615 4.101 - 4.124: 96.1053% ( 68) 00:16:16.615 4.124 - 4.148: 96.4336% ( 44) 00:16:16.615 4.148 - 4.172: 96.7470% ( 42) 00:16:16.615 4.172 - 4.196: 96.8888% ( 19) 00:16:16.615 4.196 - 4.219: 97.0156% ( 17) 00:16:16.615 4.219 - 4.243: 97.2021% ( 25) 00:16:16.615 4.243 - 4.267: 97.3588% ( 21) 00:16:16.615 4.267 - 4.290: 97.4483% ( 12) 00:16:16.615 4.290 - 4.314: 97.5379% ( 12) 00:16:16.615 4.314 - 4.338: 97.6050% ( 9) 00:16:16.615 4.338 - 4.361: 97.6647% ( 8) 00:16:16.615 4.361 - 4.385: 97.7617% ( 13) 00:16:16.615 4.385 - 4.409: 97.8065% ( 6) 00:16:16.615 4.409 - 4.433: 97.8139% ( 1) 00:16:16.615 4.433 - 4.456: 97.8363% ( 3) 00:16:16.615 4.456 - 4.480: 97.8512% ( 2) 00:16:16.615 4.480 - 4.504: 97.8587% ( 1) 00:16:16.615 4.504 - 4.527: 97.8736% ( 2) 00:16:16.615 4.622 - 4.646: 97.8811% ( 1) 00:16:16.615 4.646 - 4.670: 97.8960% ( 2) 00:16:16.615 4.670 - 4.693: 97.9035% ( 1) 00:16:16.615 4.717 - 4.741: 97.9184% ( 2) 00:16:16.615 4.741 - 4.764: 97.9258% ( 1) 00:16:16.615 4.764 - 4.788: 97.9482% ( 3) 00:16:16.615 4.812 - 4.836: 97.9781% ( 4) 00:16:16.615 4.836 - 4.859: 98.0228% ( 6) 00:16:16.615 4.859 - 4.883: 98.0751% ( 7) 00:16:16.615 4.883 - 4.907: 98.1571% ( 11) 00:16:16.615 4.907 - 4.930: 98.2243% ( 9) 00:16:16.615 4.930 - 4.954: 98.2541% ( 4) 00:16:16.615 
4.954 - 4.978: 98.2989% ( 6) 00:16:16.615 4.978 - 5.001: 98.3437% ( 6) 00:16:16.615 5.001 - 5.025: 98.4183% ( 10) 00:16:16.615 5.025 - 5.049: 98.4556% ( 5) 00:16:16.615 5.049 - 5.073: 98.4854% ( 4) 00:16:16.615 5.073 - 5.096: 98.5078% ( 3) 00:16:16.615 5.096 - 5.120: 98.5302% ( 3) 00:16:16.615 5.120 - 5.144: 98.5451% ( 2) 00:16:16.615 5.144 - 5.167: 98.5749% ( 4) 00:16:16.615 5.167 - 5.191: 98.6197% ( 6) 00:16:16.615 5.191 - 5.215: 98.6496% ( 4) 00:16:16.615 5.215 - 5.239: 98.6570% ( 1) 00:16:16.615 5.239 - 5.262: 98.6794% ( 3) 00:16:16.615 5.262 - 5.286: 98.6943% ( 2) 00:16:16.615 5.286 - 5.310: 98.7018% ( 1) 00:16:16.615 5.333 - 5.357: 98.7167% ( 2) 00:16:16.615 5.357 - 5.381: 98.7316% ( 2) 00:16:16.615 5.381 - 5.404: 98.7391% ( 1) 00:16:16.615 5.404 - 5.428: 98.7540% ( 2) 00:16:16.615 5.618 - 5.641: 98.7615% ( 1) 00:16:16.615 5.713 - 5.736: 98.7689% ( 1) 00:16:16.615 6.542 - 6.590: 98.7764% ( 1) 00:16:16.615 6.590 - 6.637: 98.7839% ( 1) 00:16:16.615 7.111 - 7.159: 98.7913% ( 1) 00:16:16.615 7.159 - 7.206: 98.7988% ( 1) 00:16:16.615 7.206 - 7.253: 98.8062% ( 1) 00:16:16.615 7.253 - 7.301: 98.8137% ( 1) 00:16:16.615 7.443 - 7.490: 98.8212% ( 1) 00:16:16.615 7.538 - 7.585: 98.8361% ( 2) 00:16:16.615 7.727 - 7.775: 98.8435% ( 1) 00:16:16.615 7.775 - 7.822: 98.8585% ( 2) 00:16:16.615 7.822 - 7.870: 98.8659% ( 1) 00:16:16.615 7.870 - 7.917: 98.8808% ( 2) 00:16:16.615 7.917 - 7.964: 98.9032% ( 3) 00:16:16.615 7.964 - 8.012: 98.9107% ( 1) 00:16:16.615 8.059 - 8.107: 98.9182% ( 1) 00:16:16.615 8.107 - 8.154: 98.9331% ( 2) 00:16:16.615 8.154 - 8.201: 98.9405% ( 1) 00:16:16.615 8.201 - 8.249: 98.9629% ( 3) 00:16:16.615 8.249 - 8.296: 98.9704% ( 1) 00:16:16.615 8.439 - 8.486: 98.9928% ( 3) 00:16:16.615 8.581 - 8.628: 99.0002% ( 1) 00:16:16.615 8.723 - 8.770: 99.0077% ( 1) 00:16:16.615 8.818 - 8.865: 99.0151% ( 1) 00:16:16.615 8.865 - 8.913: 99.0226% ( 1) 00:16:16.615 8.913 - 8.960: 99.0375% ( 2) 00:16:16.615 8.960 - 9.007: 99.0450% ( 1) 00:16:16.615 9.007 - 9.055: 99.0525% ( 1) 00:16:16.615 9.055 - 9.102: 99.0599% ( 1) 00:16:16.615 9.102 - 9.150: 99.0674% ( 1) 00:16:16.615 9.150 - 9.197: 99.0748% ( 1) 00:16:16.615 9.197 - 9.244: 99.0823% ( 1) 00:16:16.615 9.529 - 9.576: 99.0898% ( 1) 00:16:16.615 9.671 - 9.719: 99.0972% ( 1) 00:16:16.615 9.813 - 9.861: 99.1047% ( 1) 00:16:16.615 9.908 - 9.956: 99.1121% ( 1) 00:16:16.615 10.050 - 10.098: 99.1196% ( 1) 00:16:16.615 10.809 - 10.856: 99.1271% ( 1) 00:16:16.615 11.093 - 11.141: 99.1345% ( 1) 00:16:16.615 11.425 - 11.473: 99.1420% ( 1) 00:16:16.615 11.710 - 11.757: 99.1494% ( 1) 00:16:16.615 12.089 - 12.136: 99.1569% ( 1) 00:16:16.615 12.231 - 12.326: 99.1644% ( 1) 00:16:16.615 12.326 - 12.421: 99.1718% ( 1) 00:16:16.615 12.421 - 12.516: 99.1793% ( 1) 00:16:16.615 12.516 - 12.610: 99.1867% ( 1) 00:16:16.615 12.990 - 13.084: 99.1942% ( 1) 00:16:16.615 13.464 - 13.559: 99.2017% ( 1) 00:16:16.615 13.559 - 13.653: 99.2091% ( 1) 00:16:16.615 13.938 - 14.033: 99.2166% ( 1) 00:16:16.615 14.507 - 14.601: 99.2241% ( 1) 00:16:16.615 14.886 - 14.981: 99.2315% ( 1) 00:16:16.615 15.834 - 15.929: 99.2390% ( 1) 00:16:16.615 16.972 - 17.067: 99.2464% ( 1) 00:16:16.615 17.067 - 17.161: 99.2539% ( 1) 00:16:16.615 17.161 - 17.256: 99.2614% ( 1) 00:16:16.615 17.351 - 17.446: 99.2763% ( 2) 00:16:16.615 17.541 - 17.636: 99.2837% ( 1) 00:16:16.615 17.636 - 17.730: 99.3285% ( 6) 00:16:16.615 17.730 - 17.825: 99.3509% ( 3) 00:16:16.615 17.825 - 17.920: 99.4180% ( 9) 00:16:16.615 17.920 - 18.015: 99.4553% ( 5) 00:16:16.615 18.015 - 18.110: 99.4927% ( 5) 00:16:16.615 18.110 - 
18.204: 99.5225% ( 4) 00:16:16.615 18.204 - 18.299: 99.6493% ( 17) 00:16:16.615 18.299 - 18.394: 99.6941% ( 6) 00:16:16.615 18.394 - 18.489: 99.7389% ( 6) 00:16:16.615 18.489 - 18.584: 99.7836% ( 6) 00:16:16.615 18.584 - 18.679: 99.8284% ( 6) 00:16:16.615 18.679 - 18.773: 99.8806% ( 7) 00:16:16.615 18.868 - 18.963: 99.9030% ( 3) 00:16:16.615 18.963 - 19.058: 99.9105% ( 1) 00:16:16.615 19.342 - 19.437: 99.9179% ( 1) 00:16:16.615 20.006 - 20.101: 99.9329% ( 2) 00:16:16.615 23.988 - 24.083: 99.9403% ( 1) 00:16:16.615 3980.705 - 4004.978: 99.9851% ( 6) 00:16:16.615 4004.978 - 4029.250: 100.0000% ( 2) 00:16:16.615 00:16:16.615 Complete histogram 00:16:16.615 ================== 00:16:16.615 Range in us Cumulative Count 00:16:16.615 2.027 - 2.039: 0.0075% ( 1) 00:16:16.615 2.039 - 2.050: 14.3998% ( 1929) 00:16:16.615 2.050 - 2.062: 42.0577% ( 3707) 00:16:16.615 2.062 - 2.074: 44.9228% ( 384) 00:16:16.615 2.074 - 2.086: 53.1672% ( 1105) 00:16:16.615 2.086 - 2.098: 60.8147% ( 1025) 00:16:16.615 2.098 - 2.110: 62.4636% ( 221) 00:16:16.615 2.110 - 2.121: 73.7745% ( 1516) 00:16:16.616 2.121 - 2.133: 77.5050% ( 500) 00:16:16.616 2.133 - 2.145: 78.6391% ( 152) 00:16:16.616 2.145 - 2.157: 82.0637% ( 459) 00:16:16.616 2.157 - 2.169: 83.7126% ( 221) 00:16:16.616 2.169 - 2.181: 84.2871% ( 77) 00:16:16.616 2.181 - 2.193: 87.6520% ( 451) 00:16:16.616 2.193 - 2.204: 89.4874% ( 246) 00:16:16.616 2.204 - 2.216: 91.5616% ( 278) 00:16:16.616 2.216 - 2.228: 93.0762% ( 203) 00:16:16.616 2.228 - 2.240: 93.7701% ( 93) 00:16:16.616 2.240 - 2.252: 94.0088% ( 32) 00:16:16.616 2.252 - 2.264: 94.3445% ( 45) 00:16:16.616 2.264 - 2.276: 94.6952% ( 47) 00:16:16.616 2.276 - 2.287: 95.2921% ( 80) 00:16:16.616 2.287 - 2.299: 95.5458% ( 34) 00:16:16.616 2.299 - 2.311: 95.7025% ( 21) 00:16:16.616 2.311 - 2.323: 95.8069% ( 14) 00:16:16.616 2.323 - 2.335: 96.0009% ( 26) 00:16:16.616 2.335 - 2.347: 96.1874% ( 25) 00:16:16.616 2.347 - 2.359: 96.6425% ( 61) 00:16:16.616 2.359 - 2.370: 97.0081% ( 49) 00:16:16.616 2.370 - 2.382: 97.2842% ( 37) 00:16:16.616 2.382 - 2.394: 97.4633% ( 24) 00:16:16.616 2.394 - 2.406: 97.6050% ( 19) 00:16:16.616 2.406 - 2.418: 97.6945% ( 12) 00:16:16.616 2.418 - 2.430: 97.7841% ( 12) 00:16:16.616 2.430 - 2.441: 97.9258% ( 19) 00:16:16.616 2.441 - 2.453: 97.9855% ( 8) 00:16:16.616 2.453 - 2.465: 98.0974% ( 15) 00:16:16.616 2.465 - 2.477: 98.1571% ( 8) 00:16:16.616 2.477 - 2.489: 98.2019% ( 6) 00:16:16.616 2.489 - 2.501: 98.3213% ( 16) 00:16:16.616 2.501 - 2.513: 98.3735% ( 7) 00:16:16.616 2.513 - 2.524: 98.4108% ( 5) 00:16:16.616 2.524 - 2.536: 98.4481% ( 5) 00:16:16.616 2.536 - 2.548: 98.4705% ( 3) 00:16:16.616 2.548 - 2.560: 98.4929% ( 3) 00:16:16.616 2.560 - 2.572: 98.5003% ( 1) 00:16:16.616 2.572 - 2.584: 98.5078% ( 1) 00:16:16.616 2.596 - 2.607: 98.5153% ( 1) 00:16:16.616 2.667 - 2.679: 98.5227% ( 1) 00:16:16.616 2.679 - 2.690: 98.5302% ( 1) 00:16:16.616 2.690 - 2.702: 98.5376% ( 1) 00:16:16.616 2.821 - 2.833: 98.5451% ( 1) 00:16:16.616 2.833 - 2.844: 98.5526% ( 1) 00:16:16.616 2.916 - 2.927: 98.5600% ( 1) 00:16:16.616 3.437 - 3.461: 98.5749% ( 2) 00:16:16.616 3.461 - 3.484: 98.5973% ( 3) 00:16:16.616 3.484 - 3.508: 98.6048% ( 1) 00:16:16.616 3.532 - 3.556: 98.6197% ( 2) 00:16:16.616 3.556 - 3.579: 98.6346% ( 2) 00:16:16.616 3.579 - 3.603: 98.6496% ( 2) 00:16:16.616 3.603 - 3.627: 98.6570% ( 1) 00:16:16.616 3.650 - 3.674: 98.6645% ( 1) 00:16:16.616 3.674 - 3.698: 98.6719% ( 1) 00:16:16.616 3.698 - 3.721: 98.6869% ( 2) 00:16:16.616 3.721 - 3.745: 98.6943% ( 1) 00:16:16.616 3.793 - 3.816: 98.7242% ( 4) 
00:16:16.616 3.816 - 3.840: 98.7316% ( 1) 00:16:16.616 3.911 - 3.935: 98.7465% ( 2) 00:16:16.616 3.935 - 3.959: 98.7615% ( 2) 00:16:16.616 3.982 - 4.006: 98.7689% ( 1) 00:16:16.616 4.504 - 4.527: 98.7764% ( 1) 00:16:16.616 5.073 - 5.096: 98.7839% ( 1) 00:16:16.616 5.902 - 5.926: 98.7913% ( 1) 00:16:16.616 5.973 - 5.997: 98.7988% ( 1) 00:16:16.616 5.997 - 6.021: 98.8062% ( 1) 00:16:16.616 6.044 - 6.068: 98.8137% ( 1) 00:16:16.616 6.116 - 6.163: 98.8212% ( 1) 00:16:16.616 6.258 - 6.305: 98.8361% ( 2) 00:16:16.616 6.353 - 6.400: 98.8510% ( 2) 00:16:16.616 6.732 - 6.779: 98.8585% ( 1) 00:16:16.616 6.874 - 6.921: 98.8659% ( 1) 00:16:16.616 7.111 - 7.159: 98.8734% ( 1) 00:16:16.616 7.301 - 7.348: 98.8808% ( 1) 00:16:16.616 7.348 - 7.396: 98.8883% ( 1) 00:16:16.616 8.012 - 8.059: 98.8958% ( 1) 00:16:16.616 9.244 - 9.292: 98.9032% ( 1) 00:16:16.616 11.615 - 11.662: 98.9107% ( 1) 00:16:16.616 15.550 - 15.644: 98.9182% ( 1) 00:16:16.616 15.644 - 15.739: 98.9331% ( 2) 00:16:16.616 15.739 - 15.834: 98.9778% ( 6) 00:16:16.616 15.929 - 16.024: 99.0077% ( 4) 00:16:16.616 16.024 - 16.119: 99.0301% ( 3) 00:16:16.616 16.119 - 16.213: 99.0375% ( 1) 00:16:16.616 16.213 - 16.308: 99.0748% ( 5) 00:16:16.616 16.308 - 16.403: 99.0898% ( 2) 00:16:16.616 16.403 - 16.498: 99.1345% ( 6) 00:16:16.616 16.498 - 16.593: 99.2017% ( 9) 00:16:16.616 16.593 - 16.687: 99.2315% ( 4) 00:16:16.616 16.687 - 16.782: 99.2614% ( 4) 00:16:16.616 [2024-07-15 06:45:03.881633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:16.616 16.782 - 16.877: 99.2688% ( 1) 00:16:16.616 16.877 - 16.972: 99.2987% ( 4) 00:16:16.616 16.972 - 17.067: 99.3136% ( 2) 00:16:16.616 17.067 - 17.161: 99.3285% ( 2) 00:16:16.616 17.256 - 17.351: 99.3360% ( 1) 00:16:16.616 17.541 - 17.636: 99.3434% ( 1) 00:16:16.616 17.636 - 17.730: 99.3584% ( 2) 00:16:16.616 17.730 - 17.825: 99.3658% ( 1) 00:16:16.616 18.204 - 18.299: 99.3807% ( 2) 00:16:16.616 18.299 - 18.394: 99.3957% ( 2) 00:16:16.616 20.954 - 21.049: 99.4031% ( 1) 00:16:16.616 2014.625 - 2026.761: 99.4106% ( 1) 00:16:16.616 2051.034 - 2063.170: 99.4180% ( 1) 00:16:16.616 3179.710 - 3203.982: 99.4255% ( 1) 00:16:16.616 3980.705 - 4004.978: 99.8806% ( 61) 00:16:16.616 4004.978 - 4029.250: 99.9851% ( 14) 00:16:16.616 5995.330 - 6019.603: 100.0000% ( 2) 00:16:16.616 00:16:16.616 06:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:16.616 06:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:16.616 06:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:16.616 06:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:16.616 06:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:16.616 [ 00:16:16.616 { 00:16:16.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:16.616 "subtype": "Discovery", 00:16:16.616 "listen_addresses": [], 00:16:16.616 "allow_any_host": true, 00:16:16.616 "hosts": [] 00:16:16.616 }, 00:16:16.616 { 00:16:16.616 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:16.616 "subtype": "NVMe", 00:16:16.616 "listen_addresses": [ 00:16:16.616 { 00:16:16.616 "trtype": "VFIOUSER", 00:16:16.616 "adrfam": "IPv4", 00:16:16.616 "traddr":
"/var/run/vfio-user/domain/vfio-user1/1", 00:16:16.616 "trsvcid": "0" 00:16:16.616 } 00:16:16.616 ], 00:16:16.616 "allow_any_host": true, 00:16:16.616 "hosts": [], 00:16:16.616 "serial_number": "SPDK1", 00:16:16.616 "model_number": "SPDK bdev Controller", 00:16:16.616 "max_namespaces": 32, 00:16:16.616 "min_cntlid": 1, 00:16:16.616 "max_cntlid": 65519, 00:16:16.616 "namespaces": [ 00:16:16.616 { 00:16:16.616 "nsid": 1, 00:16:16.616 "bdev_name": "Malloc1", 00:16:16.616 "name": "Malloc1", 00:16:16.616 "nguid": "A8294F0E97074DD8A9D7462F8F544FF1", 00:16:16.616 "uuid": "a8294f0e-9707-4dd8-a9d7-462f8f544ff1" 00:16:16.616 }, 00:16:16.616 { 00:16:16.616 "nsid": 2, 00:16:16.616 "bdev_name": "Malloc3", 00:16:16.616 "name": "Malloc3", 00:16:16.616 "nguid": "988821E8FB6A4CB39D2B03F3CC7458FA", 00:16:16.616 "uuid": "988821e8-fb6a-4cb3-9d2b-03f3cc7458fa" 00:16:16.616 } 00:16:16.616 ] 00:16:16.616 }, 00:16:16.616 { 00:16:16.616 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:16.616 "subtype": "NVMe", 00:16:16.616 "listen_addresses": [ 00:16:16.616 { 00:16:16.616 "trtype": "VFIOUSER", 00:16:16.616 "adrfam": "IPv4", 00:16:16.616 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:16.616 "trsvcid": "0" 00:16:16.616 } 00:16:16.616 ], 00:16:16.616 "allow_any_host": true, 00:16:16.616 "hosts": [], 00:16:16.616 "serial_number": "SPDK2", 00:16:16.616 "model_number": "SPDK bdev Controller", 00:16:16.616 "max_namespaces": 32, 00:16:16.616 "min_cntlid": 1, 00:16:16.616 "max_cntlid": 65519, 00:16:16.616 "namespaces": [ 00:16:16.616 { 00:16:16.616 "nsid": 1, 00:16:16.616 "bdev_name": "Malloc2", 00:16:16.616 "name": "Malloc2", 00:16:16.616 "nguid": "014BD889B96A47CB9B9D7B78600D0547", 00:16:16.616 "uuid": "014bd889-b96a-47cb-9b9d-7b78600d0547" 00:16:16.616 } 00:16:16.616 ] 00:16:16.616 } 00:16:16.616 ] 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=608548 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:16.616 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:16.875 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.875 [2024-07-15 06:45:04.355416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.875 Malloc4 00:16:17.133 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:17.390 [2024-07-15 06:45:04.772594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.390 06:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:17.390 Asynchronous Event Request test 00:16:17.390 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.390 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.390 Registering asynchronous event callbacks... 00:16:17.390 Starting namespace attribute notice tests for all controllers... 00:16:17.390 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:17.390 aer_cb - Changed Namespace 00:16:17.390 Cleaning up... 00:16:17.649 [ 00:16:17.649 { 00:16:17.649 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:17.649 "subtype": "Discovery", 00:16:17.649 "listen_addresses": [], 00:16:17.649 "allow_any_host": true, 00:16:17.649 "hosts": [] 00:16:17.649 }, 00:16:17.649 { 00:16:17.649 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:17.649 "subtype": "NVMe", 00:16:17.649 "listen_addresses": [ 00:16:17.649 { 00:16:17.649 "trtype": "VFIOUSER", 00:16:17.649 "adrfam": "IPv4", 00:16:17.649 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:17.649 "trsvcid": "0" 00:16:17.649 } 00:16:17.649 ], 00:16:17.649 "allow_any_host": true, 00:16:17.649 "hosts": [], 00:16:17.649 "serial_number": "SPDK1", 00:16:17.649 "model_number": "SPDK bdev Controller", 00:16:17.649 "max_namespaces": 32, 00:16:17.649 "min_cntlid": 1, 00:16:17.649 "max_cntlid": 65519, 00:16:17.649 "namespaces": [ 00:16:17.649 { 00:16:17.649 "nsid": 1, 00:16:17.649 "bdev_name": "Malloc1", 00:16:17.649 "name": "Malloc1", 00:16:17.649 "nguid": "A8294F0E97074DD8A9D7462F8F544FF1", 00:16:17.649 "uuid": "a8294f0e-9707-4dd8-a9d7-462f8f544ff1" 00:16:17.649 }, 00:16:17.649 { 00:16:17.649 "nsid": 2, 00:16:17.649 "bdev_name": "Malloc3", 00:16:17.649 "name": "Malloc3", 00:16:17.649 "nguid": "988821E8FB6A4CB39D2B03F3CC7458FA", 00:16:17.649 "uuid": "988821e8-fb6a-4cb3-9d2b-03f3cc7458fa" 00:16:17.649 } 00:16:17.649 ] 00:16:17.649 }, 00:16:17.649 { 00:16:17.649 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:17.649 "subtype": "NVMe", 00:16:17.649 "listen_addresses": [ 00:16:17.649 { 00:16:17.649 "trtype": "VFIOUSER", 00:16:17.649 "adrfam": "IPv4", 00:16:17.649 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:17.649 "trsvcid": "0" 00:16:17.649 } 00:16:17.649 ], 00:16:17.649 "allow_any_host": true, 00:16:17.649 "hosts": [], 00:16:17.649 "serial_number": "SPDK2", 00:16:17.649 "model_number": "SPDK bdev Controller", 00:16:17.649 
"max_namespaces": 32, 00:16:17.649 "min_cntlid": 1, 00:16:17.649 "max_cntlid": 65519, 00:16:17.649 "namespaces": [ 00:16:17.649 { 00:16:17.649 "nsid": 1, 00:16:17.649 "bdev_name": "Malloc2", 00:16:17.649 "name": "Malloc2", 00:16:17.649 "nguid": "014BD889B96A47CB9B9D7B78600D0547", 00:16:17.649 "uuid": "014bd889-b96a-47cb-9b9d-7b78600d0547" 00:16:17.649 }, 00:16:17.649 { 00:16:17.649 "nsid": 2, 00:16:17.649 "bdev_name": "Malloc4", 00:16:17.649 "name": "Malloc4", 00:16:17.649 "nguid": "EE04071021A94C459190B7548B38869C", 00:16:17.649 "uuid": "ee040710-21a9-4c45-9190-b7548b38869c" 00:16:17.649 } 00:16:17.649 ] 00:16:17.649 } 00:16:17.649 ] 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 608548 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 602958 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 602958 ']' 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 602958 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 602958 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:17.649 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:17.650 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 602958' 00:16:17.650 killing process with pid 602958 00:16:17.650 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 602958 00:16:17.650 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 602958 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=608722 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 608722' 00:16:17.907 Process pid: 608722 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 608722 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 608722 ']' 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:16:17.907 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.908 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:17.908 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:17.908 [2024-07-15 06:45:05.438150] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:17.908 [2024-07-15 06:45:05.439269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:17.908 [2024-07-15 06:45:05.439331] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.908 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.908 [2024-07-15 06:45:05.506037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.166 [2024-07-15 06:45:05.597094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.166 [2024-07-15 06:45:05.597157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.166 [2024-07-15 06:45:05.597174] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.166 [2024-07-15 06:45:05.597187] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.166 [2024-07-15 06:45:05.597199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.166 [2024-07-15 06:45:05.597289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.166 [2024-07-15 06:45:05.597348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.166 [2024-07-15 06:45:05.597464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.166 [2024-07-15 06:45:05.597467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.166 [2024-07-15 06:45:05.698375] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:18.166 [2024-07-15 06:45:05.698603] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:18.166 [2024-07-15 06:45:05.698898] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:18.166 [2024-07-15 06:45:05.699564] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:18.166 [2024-07-15 06:45:05.699799] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
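For orientation: the interrupt-mode bring-up that the surrounding trace performs one xtrace line at a time (the target launch above, the transport and subsystem setup in the lines below) condenses to roughly the following shell sequence. This is a sketch only; paths are shortened to the spdk checkout root, while the commands and flags are copied verbatim from the trace. The "to intr mode" notices above are the effect of --interrupt-mode: SPDK threads switch from busy polling to event-driven wakeups.

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done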
00:16:18.166 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.166 06:45:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:18.166 06:45:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:19.542 06:45:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:19.542 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:19.542 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:19.542 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:19.542 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:19.542 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:19.801 Malloc1 00:16:19.801 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:20.059 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:20.316 06:45:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:20.574 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:20.574 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:20.574 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:20.832 Malloc2 00:16:20.832 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:21.091 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:21.349 06:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 608722 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 608722 ']' 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 608722 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:21.607 06:45:09 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 608722 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 608722' 00:16:21.607 killing process with pid 608722 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 608722 00:16:21.607 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 608722 00:16:21.865 06:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:21.865 06:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:21.865 00:16:21.865 real 0m52.748s 00:16:21.865 user 3m28.259s 00:16:21.865 sys 0m4.513s 00:16:21.865 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.865 06:45:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:21.865 ************************************ 00:16:21.865 END TEST nvmf_vfio_user 00:16:21.865 ************************************ 00:16:21.865 06:45:09 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:21.865 06:45:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:21.865 06:45:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:21.865 06:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.125 ************************************ 00:16:22.125 START TEST nvmf_vfio_user_nvme_compliance 00:16:22.125 ************************************ 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:22.125 * Looking for test storage... 
00:16:22.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=609789 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 609789' 00:16:22.126 Process pid: 609789 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 609789 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 609789 ']' 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:22.126 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:22.126 [2024-07-15 06:45:09.598651] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:22.126 [2024-07-15 06:45:09.598734] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.126 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.126 [2024-07-15 06:45:09.657602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:22.416 [2024-07-15 06:45:09.744908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.416 [2024-07-15 06:45:09.744973] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.416 [2024-07-15 06:45:09.744988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.416 [2024-07-15 06:45:09.744999] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.416 [2024-07-15 06:45:09.745010] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
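The two app_setup_trace notices above are directly usable: this target was started with tracepoint group mask 0xFFFF and shared-memory id 0, so a snapshot can be pulled while it runs. A minimal sketch, assuming the SPDK tools were built alongside the target (the output paths here are arbitrary):

    build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt    # decode a live snapshot, per the notice
    cp /dev/shm/nvmf_trace.0 /tmp/                             # or keep the raw buffer for offline analysis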
00:16:22.416 [2024-07-15 06:45:09.745069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.416 [2024-07-15 06:45:09.747896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.416 [2024-07-15 06:45:09.747908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.416 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:22.416 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:22.416 06:45:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 malloc0 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 
06:45:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:23.615 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.615 00:16:23.615 00:16:23.615 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.615 http://cunit.sourceforge.net/ 00:16:23.615 00:16:23.615 00:16:23.615 Suite: nvme_compliance 00:16:23.615 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 06:45:11.084398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.615 [2024-07-15 06:45:11.085815] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:23.615 [2024-07-15 06:45:11.085838] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:23.615 [2024-07-15 06:45:11.085882] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:23.615 [2024-07-15 06:45:11.087418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.615 passed 00:16:23.615 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 06:45:11.175024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.615 [2024-07-15 06:45:11.178045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.615 passed 00:16:23.874 Test: admin_identify_ns ...[2024-07-15 06:45:11.268403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.874 [2024-07-15 06:45:11.328897] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:23.874 [2024-07-15 06:45:11.336896] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:23.874 [2024-07-15 06:45:11.358023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.874 passed 00:16:23.874 Test: admin_get_features_mandatory_features ...[2024-07-15 06:45:11.442812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.874 [2024-07-15 06:45:11.445835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.874 passed 00:16:24.134 Test: admin_get_features_optional_features ...[2024-07-15 06:45:11.531400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.134 [2024-07-15 06:45:11.536435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.134 passed 00:16:24.134 Test: admin_set_features_number_of_queues ...[2024-07-15 06:45:11.624894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.134 [2024-07-15 06:45:11.727994] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.394 passed 00:16:24.394 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 06:45:11.814558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.394 [2024-07-15 06:45:11.817582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.394 passed 00:16:24.394 Test: admin_get_log_page_with_lpo ...[2024-07-15 06:45:11.900714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.394 [2024-07-15 06:45:11.971916] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:24.394 [2024-07-15 06:45:11.984951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.654 passed 00:16:24.654 Test: fabric_property_get ...[2024-07-15 06:45:12.069183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.654 [2024-07-15 06:45:12.070452] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:24.654 [2024-07-15 06:45:12.072207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.654 passed 00:16:24.654 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 06:45:12.158748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.654 [2024-07-15 06:45:12.160059] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:24.654 [2024-07-15 06:45:12.161767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.654 passed 00:16:24.654 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 06:45:12.248948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.913 [2024-07-15 06:45:12.334886] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:24.913 [2024-07-15 06:45:12.350890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:24.913 [2024-07-15 06:45:12.355980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.913 passed 00:16:24.913 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 06:45:12.438582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.913 [2024-07-15 06:45:12.439885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:24.913 [2024-07-15 06:45:12.441600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.913 passed 00:16:24.913 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 06:45:12.525051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.172 [2024-07-15 06:45:12.601886] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:25.172 [2024-07-15 06:45:12.625885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:25.172 [2024-07-15 06:45:12.631004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.172 passed 00:16:25.172 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 06:45:12.713198] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.172 [2024-07-15 06:45:12.714482] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:25.172 [2024-07-15 06:45:12.714534] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:25.172 [2024-07-15 06:45:12.716217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.172 passed 00:16:25.432 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 06:45:12.805512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.432 [2024-07-15 06:45:12.897886] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:25.432 [2024-07-15 06:45:12.905884] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:25.432 [2024-07-15 06:45:12.913885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:25.432 [2024-07-15 06:45:12.921885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:25.432 [2024-07-15 06:45:12.950980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.432 passed 00:16:25.432 Test: admin_create_io_sq_verify_pc ...[2024-07-15 06:45:13.033579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.690 [2024-07-15 06:45:13.048903] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:25.691 [2024-07-15 06:45:13.066957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.691 passed 00:16:25.691 Test: admin_create_io_qp_max_qps ...[2024-07-15 06:45:13.151518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.068 [2024-07-15 06:45:14.272894] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:27.068 [2024-07-15 06:45:14.659856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.329 passed 00:16:27.329 Test: admin_create_io_sq_shared_cq ...[2024-07-15 06:45:14.745067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.329 [2024-07-15 06:45:14.877888] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:27.329 [2024-07-15 06:45:14.914974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.588 passed 00:16:27.588 00:16:27.588 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.588 suites 1 1 n/a 0 0 00:16:27.588 tests 18 18 18 0 0 00:16:27.588 asserts 360 360 360 0 n/a 00:16:27.588 00:16:27.588 Elapsed time = 1.592 seconds 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 609789 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 609789 ']' 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 609789 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 609789 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 609789' 00:16:27.588 killing process with pid 609789 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 609789 00:16:27.588 06:45:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 609789 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:27.847 00:16:27.847 real 0m5.761s 00:16:27.847 user 0m16.238s 00:16:27.847 sys 0m0.557s 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.847 ************************************ 00:16:27.847 END TEST nvmf_vfio_user_nvme_compliance 00:16:27.847 ************************************ 00:16:27.847 06:45:15 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:27.847 06:45:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:27.847 06:45:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:27.847 06:45:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.847 ************************************ 00:16:27.847 START TEST nvmf_vfio_user_fuzz 00:16:27.847 ************************************ 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:27.847 * Looking for test storage... 00:16:27.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.847 06:45:15 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.847 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=610510 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 610510' 00:16:27.848 Process pid: 610510 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 610510 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 610510 ']' 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
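waitforlisten above simply blocks until the freshly launched target (pid 610510) answers on /var/tmp/spdk.sock. A hypothetical stand-in for it, not the repo's implementation, is a poll against any cheap RPC:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                 # the fuzz target launched above
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                              # the real helper caps this at max_retries=100
    done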
00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.848 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.107 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.107 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:28.107 06:45:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 malloc0 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:29.485 06:45:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:01.552 Fuzzing completed. 
Shutting down the fuzz application 00:17:01.552 00:17:01.552 Dumping successful admin opcodes: 00:17:01.552 8, 9, 10, 24, 00:17:01.552 Dumping successful io opcodes: 00:17:01.552 0, 00:17:01.552 NS: 0x200003a1ef00 I/O qp, Total commands completed: 599177, total successful commands: 2317, random_seed: 309803648 00:17:01.552 NS: 0x200003a1ef00 admin qp, Total commands completed: 84525, total successful commands: 672, random_seed: 3118444800 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 610510 ']' 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 610510' 00:17:01.552 killing process with pid 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 610510 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:01.552 00:17:01.552 real 0m32.175s 00:17:01.552 user 0m31.457s 00:17:01.552 sys 0m29.203s 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:01.552 06:45:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.552 ************************************ 00:17:01.552 END TEST nvmf_vfio_user_fuzz 00:17:01.552 ************************************ 00:17:01.552 06:45:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:01.552 06:45:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:01.552 06:45:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:01.552 06:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.552 ************************************ 00:17:01.552 START TEST nvmf_host_management 00:17:01.552 ************************************ 
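Each START/END banner pair in this log is emitted by the run_test wrapper from autotest_common.sh; the '[' 3 -le 1 ']' argument check and the real/user/sys timing lines above are its visible traces. A hypothetical reconstruction of that behavior (a sketch, not the verbatim source):

    run_test() {                   # hypothetical sketch of the wrapper's visible behavior
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # here: test/nvmf/target/host_management.sh --transport=tcp
        echo "END TEST $name"
    }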
00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:01.552 * Looking for test storage... 00:17:01.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.552 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.553 06:45:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.122 06:45:49 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.122 06:45:49 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:02.122 00:17:02.122 --- 10.0.0.2 ping statistics --- 00:17:02.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.122 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:02.122 00:17:02.122 --- 10.0.0.1 ping statistics --- 00:17:02.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.122 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=615946 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 615946 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 615946 ']' 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:02.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.122 06:45:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 [2024-07-15 06:45:49.727011] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:02.123 [2024-07-15 06:45:49.727102] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.380 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.380 [2024-07-15 06:45:49.797774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.380 [2024-07-15 06:45:49.893614] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.380 [2024-07-15 06:45:49.893675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.380 [2024-07-15 06:45:49.893700] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.380 [2024-07-15 06:45:49.893714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.380 [2024-07-15 06:45:49.893725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.380 [2024-07-15 06:45:49.893819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.380 [2024-07-15 06:45:49.893922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.380 [2024-07-15 06:45:49.893983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.380 [2024-07-15 06:45:49.893985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.638 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 [2024-07-15 06:45:50.036510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 Malloc0 00:17:02.639 [2024-07-15 06:45:50.095971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=615996 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 615996 /var/tmp/bdevperf.sock 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 615996 ']' 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
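Both daemons in this test are gated on the same waitforlisten idiom: nvmf_tgt on /var/tmp/spdk.sock earlier, bdevperf on /var/tmp/bdevperf.sock here. A rough reconstruction follows; only the message and the max_retries=100 budget come from the trace, and probing the socket with rpc_get_methods is an assumption about how the helper decides the daemon is ready:

#!/usr/bin/env bash
# Sketch: poll until the pid is alive and its UNIX-domain RPC socket answers.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        if [ -S "$rpc_addr" ] &&
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering RPCs
        fi
        sleep 0.1
    done
    return 1                                      # gave up after 100 polls
}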
00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.639 { 00:17:02.639 "params": { 00:17:02.639 "name": "Nvme$subsystem", 00:17:02.639 "trtype": "$TEST_TRANSPORT", 00:17:02.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.639 "adrfam": "ipv4", 00:17:02.639 "trsvcid": "$NVMF_PORT", 00:17:02.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.639 "hdgst": ${hdgst:-false}, 00:17:02.639 "ddgst": ${ddgst:-false} 00:17:02.639 }, 00:17:02.639 "method": "bdev_nvme_attach_controller" 00:17:02.639 } 00:17:02.639 EOF 00:17:02.639 )") 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:02.639 06:45:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.639 "params": { 00:17:02.639 "name": "Nvme0", 00:17:02.639 "trtype": "tcp", 00:17:02.639 "traddr": "10.0.0.2", 00:17:02.639 "adrfam": "ipv4", 00:17:02.639 "trsvcid": "4420", 00:17:02.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:02.639 "hdgst": false, 00:17:02.639 "ddgst": false 00:17:02.639 }, 00:17:02.639 "method": "bdev_nvme_attach_controller" 00:17:02.639 }' 00:17:02.639 [2024-07-15 06:45:50.168013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:02.639 [2024-07-15 06:45:50.168089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615996 ] 00:17:02.639 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.639 [2024-07-15 06:45:50.235440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.898 [2024-07-15 06:45:50.324827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.157 Running I/O for 10 seconds... 
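The JSON handed to bdevperf via --json /dev/fd/63 above is assembled by gen_nvmf_target_json from the traced heredoc: one bdev_nvme_attach_controller entry per subsystem, with $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT already expanded to tcp/10.0.0.2/4420. A standalone sketch; the per-controller entry mirrors the heredoc, but the outer subsystems wrapper is not visible in the trace and is filled in here from SPDK's standard --json config layout, so treat it as an assumption:

#!/usr/bin/env bash
gen_nvmf_target_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}

# bdevperf reads the config from a file descriptor, so no temp file is needed:
# build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10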
00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.157 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.158 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.158 06:45:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.158 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:17:03.158 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:17:03.158 06:45:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.416 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.675 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:17:03.676 [2024-07-15 06:45:51.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.676 [2024-07-15 06:45:51.051415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.676 [... the same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeats for the remaining in-flight I/Os, cid:1 through cid:63, lba:73856 through lba:81792, timestamps 06:45:51.051444 through 06:45:51.053492 ...]
00:17:03.677 [2024-07-15 06:45:51.053529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:17:03.677 [2024-07-15 06:45:51.053602] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a7110 was disconnected and freed. reset controller.
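Before the script pulled the host out from under the connection, it confirmed traffic was flowing with the waitforio loop traced at host_management.sh@52-64: up to ten bdev_get_iostat polls, a quarter-second apart, until num_read_ops crosses 100 (65 on the first poll, 515 on the second in this run). A sketch of that loop using the stock rpc.py client; the rpc.py path assumes an SPDK checkout:

#!/usr/bin/env bash
rpc_sock=/var/tmp/bdevperf.sock
bdev=Nvme0n1
ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].num_read_ops')   # 65 then 515 in the run above
    if [ "$read_io_count" -ge 100 ]; then
        ret=0                             # I/O is flowing; stop waiting
        break
    fi
    sleep 0.25                            # back-off from @62
done
exit $ret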
00:17:03.677 [2024-07-15 06:45:51.054733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.677 task offset: 73728 on job bdev=Nvme0n1 fails 00:17:03.677 00:17:03.677 Latency(us) 00:17:03.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.677 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:03.677 Job: Nvme0n1 ended in about 0.38 seconds with error 00:17:03.677 Verification LBA range: start 0x0 length 0x400 00:17:03.677 Nvme0n1 : 0.38 1508.03 94.25 167.56 0.00 37068.48 4393.34 34564.17 00:17:03.677 =================================================================================================================== 00:17:03.677 Total : 1508.03 94.25 167.56 0.00 37068.48 4393.34 34564.17 00:17:03.677 [2024-07-15 06:45:51.056647] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:03.677 [2024-07-15 06:45:51.056676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d961e0 (9): Bad file descriptor 00:17:03.677 [2024-07-15 06:45:51.058694] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:03.677 [2024-07-15 06:45:51.058894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:03.677 [2024-07-15 06:45:51.058929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.677 [2024-07-15 06:45:51.058957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:03.677 [2024-07-15 06:45:51.058973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:03.677 [2024-07-15 06:45:51.058987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:03.677 [2024-07-15 06:45:51.058998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d961e0 00:17:03.677 [2024-07-15 06:45:51.059032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d961e0 (9): Bad file descriptor 00:17:03.677 [2024-07-15 06:45:51.059057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:03.677 [2024-07-15 06:45:51.059072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:03.677 [2024-07-15 06:45:51.059087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:03.677 [2024-07-15 06:45:51.059107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
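The abort storm and the rejected reconnect above are the point of the test: @84 revokes the connected initiator's host NQN mid-I/O, the target severs the qpair (SQ DELETION) and reconnects fail with 'does not allow host' until @85 grants access back. The same two RPCs in isolation, against the stock rpc.py client (paths assume an SPDK checkout, NQNs match the trace):

#!/usr/bin/env bash
NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

scripts/rpc.py nvmf_subsystem_remove_host "$NQN" "$HOST"   # @84: live qpair torn down, reconnect rejected
scripts/rpc.py nvmf_subsystem_add_host "$NQN" "$HOST"      # @85: access restored
sleep 1                                                    # @87: give the initiator time to reconnect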
00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.677 06:45:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 615996 00:17:04.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (615996) - No such process 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.647 { 00:17:04.647 "params": { 00:17:04.647 "name": "Nvme$subsystem", 00:17:04.647 "trtype": "$TEST_TRANSPORT", 00:17:04.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.647 "adrfam": "ipv4", 00:17:04.647 "trsvcid": "$NVMF_PORT", 00:17:04.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.647 "hdgst": ${hdgst:-false}, 00:17:04.647 "ddgst": ${ddgst:-false} 00:17:04.647 }, 00:17:04.647 "method": "bdev_nvme_attach_controller" 00:17:04.647 } 00:17:04.647 EOF 00:17:04.647 )") 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:04.647 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:04.648 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:04.648 06:45:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.648 "params": { 00:17:04.648 "name": "Nvme0", 00:17:04.648 "trtype": "tcp", 00:17:04.648 "traddr": "10.0.0.2", 00:17:04.648 "adrfam": "ipv4", 00:17:04.648 "trsvcid": "4420", 00:17:04.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:04.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:04.648 "hdgst": false, 00:17:04.648 "ddgst": false 00:17:04.648 }, 00:17:04.648 "method": "bdev_nvme_attach_controller" 00:17:04.648 }' 00:17:04.648 [2024-07-15 06:45:52.113967] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:04.648 [2024-07-15 06:45:52.114051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616267 ] 00:17:04.648 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.648 [2024-07-15 06:45:52.175816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.908 [2024-07-15 06:45:52.264315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.908 Running I/O for 1 seconds... 
00:17:06.286 00:17:06.286 Latency(us) 00:17:06.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.286 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:06.286 Verification LBA range: start 0x0 length 0x400 00:17:06.286 Nvme0n1 : 1.01 1605.63 100.35 0.00 0.00 39038.87 2839.89 33399.09 00:17:06.286 =================================================================================================================== 00:17:06.286 Total : 1605.63 100.35 0.00 0.00 39038.87 2839.89 33399.09 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.286 rmmod nvme_tcp 00:17:06.286 rmmod nvme_fabrics 00:17:06.286 rmmod nvme_keyring 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 615946 ']' 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 615946 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 615946 ']' 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 615946 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 615946 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 615946' 00:17:06.286 killing process with pid 615946 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 615946 00:17:06.286 06:45:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 615946 00:17:06.545 [2024-07-15 06:45:54.039438] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.545 06:45:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.077 06:45:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.077 06:45:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:09.077 00:17:09.077 real 0m8.577s 00:17:09.077 user 0m19.669s 00:17:09.077 sys 0m2.535s 00:17:09.077 06:45:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.077 06:45:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.077 ************************************ 00:17:09.077 END TEST nvmf_host_management 00:17:09.077 ************************************ 00:17:09.077 06:45:56 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:09.077 06:45:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:09.077 06:45:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.077 06:45:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.077 ************************************ 00:17:09.077 START TEST nvmf_lvol 00:17:09.077 ************************************ 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:09.078 * Looking for test storage... 
00:17:09.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.078 06:45:56 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.078 06:45:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.987 
06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.987 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:17:10.988 00:17:10.988 --- 10.0.0.2 ping statistics --- 00:17:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.988 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:10.988 00:17:10.988 --- 10.0.0.1 ping statistics --- 00:17:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.988 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=618467 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 618467 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 618467 ']' 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 [2024-07-15 06:45:58.322055] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:10.988 [2024-07-15 06:45:58.322130] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.988 [2024-07-15 06:45:58.391636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.988 [2024-07-15 06:45:58.481021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.988 [2024-07-15 06:45:58.481085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:10.988 [2024-07-15 06:45:58.481110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.988 [2024-07-15 06:45:58.481123] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.988 [2024-07-15 06:45:58.481143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.988 [2024-07-15 06:45:58.481226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.988 [2024-07-15 06:45:58.481305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.988 [2024-07-15 06:45:58.481285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.988 06:45:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:11.248 06:45:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.248 06:45:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:11.248 [2024-07-15 06:45:58.838832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.248 06:45:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.816 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:11.816 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.816 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:11.816 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:12.074 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:12.331 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f43f2e28-e27f-4099-bbcc-58f79ea79ccb 00:17:12.331 06:45:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f43f2e28-e27f-4099-bbcc-58f79ea79ccb lvol 20 00:17:12.589 06:46:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d46d5a96-e918-438a-9b55-adf8a4c5811d 00:17:12.589 06:46:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:12.847 06:46:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d46d5a96-e918-438a-9b55-adf8a4c5811d 00:17:13.105 06:46:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:13.363 [2024-07-15 06:46:00.921453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.363 06:46:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.620 06:46:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=618767 00:17:13.620 06:46:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:13.620 06:46:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:13.620 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.997 06:46:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d46d5a96-e918-438a-9b55-adf8a4c5811d MY_SNAPSHOT 00:17:14.997 06:46:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0b57d74f-f268-49bd-a6c0-31295623180c 00:17:14.997 06:46:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d46d5a96-e918-438a-9b55-adf8a4c5811d 30 00:17:15.254 06:46:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0b57d74f-f268-49bd-a6c0-31295623180c MY_CLONE 00:17:15.512 06:46:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7488bc20-27c6-4f59-b299-28bf7698994a 00:17:15.512 06:46:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7488bc20-27c6-4f59-b299-28bf7698994a 00:17:16.079 06:46:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 618767 00:17:24.194 Initializing NVMe Controllers 00:17:24.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:24.194 Controller IO queue size 128, less than required. 00:17:24.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:24.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:24.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:24.194 Initialization complete. Launching workers. 
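Before the results land below, the sequence just driven is the full lvol lifecycle that nvmf_lvol.sh exercises: build a raid0 from two malloc bdevs, carve an lvstore and a 20M lvol out of it, export it over TCP, then snapshot, grow, clone, and inflate it while spdk_nvme_perf issues random writes from cores 3 and 4 (-c 0x18). Condensed into the underlying RPCs, with the per-run UUID arguments shown as placeholders:

    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
    scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20
    scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30
    scripts/rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    scripts/rpc.py bdev_lvol_inflate <clone-uuid>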
00:17:24.194 ======================================================== 00:17:24.194 Latency(us) 00:17:24.194 Device Information : IOPS MiB/s Average min max 00:17:24.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10543.00 41.18 12144.14 2023.67 69701.46 00:17:24.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10481.30 40.94 12221.37 2101.74 73369.15 00:17:24.194 ======================================================== 00:17:24.194 Total : 21024.30 82.13 12182.64 2023.67 73369.15 00:17:24.194 00:17:24.194 06:46:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.452 06:46:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d46d5a96-e918-438a-9b55-adf8a4c5811d 00:17:24.711 06:46:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f43f2e28-e27f-4099-bbcc-58f79ea79ccb 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.970 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.971 rmmod nvme_tcp 00:17:24.971 rmmod nvme_fabrics 00:17:24.971 rmmod nvme_keyring 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 618467 ']' 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 618467 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 618467 ']' 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 618467 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 618467 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 618467' 00:17:24.971 killing process with pid 618467 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 618467 00:17:24.971 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 618467 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.229 06:46:12 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.229 06:46:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.172 06:46:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.172 00:17:27.172 real 0m18.622s 00:17:27.172 user 1m4.294s 00:17:27.172 sys 0m5.233s 00:17:27.172 06:46:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:27.172 06:46:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:27.172 ************************************ 00:17:27.172 END TEST nvmf_lvol 00:17:27.172 ************************************ 00:17:27.431 06:46:14 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.431 06:46:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:27.431 06:46:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:27.431 06:46:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 ************************************ 00:17:27.431 START TEST nvmf_lvs_grow 00:17:27.431 ************************************ 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.431 * Looking for test storage... 
00:17:27.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.431 06:46:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.341 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:17:29.341 00:17:29.341 --- 10.0.0.2 ping statistics --- 00:17:29.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.341 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:17:29.341 00:17:29.341 --- 10.0.0.1 ping statistics --- 00:17:29.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.341 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.341 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=622027 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 622027 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 622027 ']' 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:29.601 06:46:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.601 [2024-07-15 06:46:17.027361] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:29.601 [2024-07-15 06:46:17.027461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.602 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.602 [2024-07-15 06:46:17.097775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.602 [2024-07-15 06:46:17.186382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.602 [2024-07-15 06:46:17.186454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:29.602 [2024-07-15 06:46:17.186479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.602 [2024-07-15 06:46:17.186493] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.602 [2024-07-15 06:46:17.186504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.602 [2024-07-15 06:46:17.186553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.860 06:46:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.119 [2024-07-15 06:46:17.575301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.119 ************************************ 00:17:30.119 START TEST lvs_grow_clean 00:17:30.119 ************************************ 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.119 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.377 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:30.377 06:46:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:30.635 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=951f18b3-430a-4a47-af00-1d0854a88b88 00:17:30.635 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:30.635 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:30.893 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:30.893 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:30.893 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 951f18b3-430a-4a47-af00-1d0854a88b88 lvol 150 00:17:31.153 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b04ee2dc-c36a-4a32-92f4-183fb6b85c88 00:17:31.153 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.153 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:31.412 [2024-07-15 06:46:18.960071] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:31.412 [2024-07-15 06:46:18.960160] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:31.412 true 00:17:31.412 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:31.412 06:46:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:31.671 06:46:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:31.671 06:46:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:31.929 06:46:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b04ee2dc-c36a-4a32-92f4-183fb6b85c88 00:17:32.189 06:46:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:32.450 [2024-07-15 06:46:20.027645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.450 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=622462 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 622462 /var/tmp/bdevperf.sock 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 622462 ']' 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.708 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 [2024-07-15 06:46:20.329322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:32.966 [2024-07-15 06:46:20.329400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622462 ] 00:17:32.966 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.966 [2024-07-15 06:46:20.391772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.966 [2024-07-15 06:46:20.482495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.224 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.224 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:33.224 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:33.482 Nvme0n1 00:17:33.482 06:46:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:33.742 [ 00:17:33.742 { 00:17:33.742 "name": "Nvme0n1", 00:17:33.742 "aliases": [ 00:17:33.742 "b04ee2dc-c36a-4a32-92f4-183fb6b85c88" 00:17:33.742 ], 00:17:33.742 "product_name": "NVMe disk", 00:17:33.742 "block_size": 4096, 00:17:33.742 "num_blocks": 38912, 00:17:33.742 "uuid": "b04ee2dc-c36a-4a32-92f4-183fb6b85c88", 00:17:33.742 "assigned_rate_limits": { 00:17:33.742 "rw_ios_per_sec": 0, 00:17:33.742 "rw_mbytes_per_sec": 0, 00:17:33.742 "r_mbytes_per_sec": 0, 00:17:33.742 "w_mbytes_per_sec": 0 00:17:33.742 }, 00:17:33.742 "claimed": false, 00:17:33.742 "zoned": false, 00:17:33.742 "supported_io_types": { 00:17:33.742 "read": true, 00:17:33.742 "write": true, 00:17:33.742 "unmap": true, 00:17:33.742 "write_zeroes": true, 00:17:33.742 "flush": true, 00:17:33.742 "reset": true, 00:17:33.742 "compare": true, 00:17:33.742 "compare_and_write": true, 00:17:33.742 "abort": true, 00:17:33.742 "nvme_admin": true, 00:17:33.742 "nvme_io": true 00:17:33.742 }, 00:17:33.742 "memory_domains": [ 00:17:33.742 { 00:17:33.742 "dma_device_id": "system", 00:17:33.742 "dma_device_type": 1 00:17:33.742 } 00:17:33.742 ], 00:17:33.742 "driver_specific": { 00:17:33.742 "nvme": [ 00:17:33.742 { 00:17:33.742 "trid": { 00:17:33.742 "trtype": "TCP", 00:17:33.742 "adrfam": "IPv4", 00:17:33.742 "traddr": "10.0.0.2", 00:17:33.742 "trsvcid": "4420", 00:17:33.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.742 }, 00:17:33.742 "ctrlr_data": { 00:17:33.742 "cntlid": 1, 00:17:33.742 "vendor_id": "0x8086", 00:17:33.742 "model_number": "SPDK bdev Controller", 00:17:33.742 "serial_number": "SPDK0", 00:17:33.742 "firmware_revision": "24.05.1", 00:17:33.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.742 "oacs": { 00:17:33.742 "security": 0, 00:17:33.742 "format": 0, 00:17:33.742 "firmware": 0, 00:17:33.742 "ns_manage": 0 00:17:33.742 }, 00:17:33.742 "multi_ctrlr": true, 00:17:33.742 "ana_reporting": false 00:17:33.742 }, 00:17:33.742 "vs": { 00:17:33.742 "nvme_version": "1.3" 00:17:33.742 }, 00:17:33.742 "ns_data": { 00:17:33.742 "id": 1, 00:17:33.742 "can_share": true 00:17:33.742 } 00:17:33.742 } 00:17:33.742 ], 00:17:33.742 "mp_policy": "active_passive" 00:17:33.742 } 00:17:33.742 } 00:17:33.742 ] 00:17:33.742 06:46:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=622598
00:17:33.742 06:46:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:17:33.742 06:46:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:33.742 Running I/O for 10 seconds...
00:17:35.123 Latency(us)
00:17:35.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:35.123 Nvme0n1 : 1.00 14616.00 57.09 0.00 0.00 0.00 0.00 0.00
00:17:35.123 ===================================================================================================================
00:17:35.123 Total : 14616.00 57.09 0.00 0.00 0.00 0.00 0.00
00:17:35.123
00:17:35.690 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 951f18b3-430a-4a47-af00-1d0854a88b88
00:17:35.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:35.947 Nvme0n1 : 2.00 14992.50 58.56 0.00 0.00 0.00 0.00 0.00
00:17:35.947 ===================================================================================================================
00:17:35.947 Total : 14992.50 58.56 0.00 0.00 0.00 0.00 0.00
00:17:35.947
00:17:35.947 true
00:17:35.947 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88
00:17:35.947 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:36.207 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:36.207 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:36.207 06:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 622598
00:17:36.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:36.775 Nvme0n1 : 3.00 15182.67 59.31 0.00 0.00 0.00 0.00 0.00
00:17:36.775 ===================================================================================================================
00:17:36.775 Total : 15182.67 59.31 0.00 0.00 0.00 0.00 0.00
00:17:36.775
00:17:37.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:37.712 Nvme0n1 : 4.00 15153.25 59.19 0.00 0.00 0.00 0.00 0.00
00:17:37.712 ===================================================================================================================
00:17:37.712 Total : 15153.25 59.19 0.00 0.00 0.00 0.00 0.00
00:17:37.712
00:17:39.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:39.092 Nvme0n1 : 5.00 15160.20 59.22 0.00 0.00 0.00 0.00 0.00
00:17:39.092 ===================================================================================================================
00:17:39.092 Total : 15160.20 59.22 0.00 0.00 0.00 0.00 0.00
00:17:39.092
00:17:40.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:40.030 Nvme0n1 : 6.00 15221.50 59.46 0.00 0.00 0.00 0.00 0.00
00:17:40.030 ===================================================================================================================
00:17:40.030 Total : 15221.50 59.46 0.00 0.00 0.00 0.00 0.00
00:17:40.030
00:17:40.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:40.964 Nvme0n1 : 7.00 15244.57 59.55 0.00 0.00 0.00 0.00 0.00
00:17:40.964 ===================================================================================================================
00:17:40.964 Total : 15244.57 59.55 0.00 0.00 0.00 0.00 0.00
00:17:40.964
00:17:41.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:41.898 Nvme0n1 : 8.00 15317.00 59.83 0.00 0.00 0.00 0.00 0.00
00:17:41.898 ===================================================================================================================
00:17:41.898 Total : 15317.00 59.83 0.00 0.00 0.00 0.00 0.00
00:17:41.898
00:17:42.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:42.860 Nvme0n1 : 9.00 15295.00 59.75 0.00 0.00 0.00 0.00 0.00
00:17:42.860 ===================================================================================================================
00:17:42.860 Total : 15295.00 59.75 0.00 0.00 0.00 0.00 0.00
00:17:42.860
00:17:43.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.792 Nvme0n1 : 10.00 15289.50 59.72 0.00 0.00 0.00 0.00 0.00
00:17:43.792 ===================================================================================================================
00:17:43.792 Total : 15289.50 59.72 0.00 0.00 0.00 0.00 0.00
00:17:43.792
00:17:43.792
00:17:43.792 Latency(us)
00:17:43.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.792 Nvme0n1 : 10.01 15292.27 59.74 0.00 0.00 8365.49 3446.71 16505.36
00:17:43.792 ===================================================================================================================
00:17:43.792 Total : 15292.27 59.74 0.00 0.00 8365.49 3446.71 16505.36
00:17:43.792 0
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 622462
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 622462 ']'
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 622462
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 622462
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 622462'
00:17:43.792 killing process with pid 622462
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 622462
00:17:43.792 Received shutdown signal, test time was about 10.000000 seconds
00:17:43.792
00:17:43.792 Latency(us)
00:17:43.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.792 ===================================================================================================================
00:17:43.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:43.792 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 622462
00:17:44.049 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:44.307 06:46:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:44.872 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88
00:17:44.872 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:17:44.872 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:17:44.872 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:17:44.872 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:45.130 [2024-07-15 06:46:32.717716] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:17:45.389 06:46:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean --
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:45.389 request: 00:17:45.389 { 00:17:45.389 "uuid": "951f18b3-430a-4a47-af00-1d0854a88b88", 00:17:45.389 "method": "bdev_lvol_get_lvstores", 00:17:45.389 "req_id": 1 00:17:45.389 } 00:17:45.389 Got JSON-RPC error response 00:17:45.389 response: 00:17:45.389 { 00:17:45.389 "code": -19, 00:17:45.389 "message": "No such device" 00:17:45.389 } 00:17:45.389 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:45.389 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.389 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.389 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.389 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.647 aio_bdev 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b04ee2dc-c36a-4a32-92f4-183fb6b85c88 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=b04ee2dc-c36a-4a32-92f4-183fb6b85c88 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:45.905 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.162 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b04ee2dc-c36a-4a32-92f4-183fb6b85c88 -t 2000 00:17:46.162 [ 00:17:46.162 { 00:17:46.162 "name": "b04ee2dc-c36a-4a32-92f4-183fb6b85c88", 00:17:46.162 "aliases": [ 00:17:46.162 "lvs/lvol" 00:17:46.162 ], 00:17:46.162 "product_name": "Logical Volume", 00:17:46.162 "block_size": 4096, 00:17:46.162 "num_blocks": 38912, 00:17:46.162 "uuid": "b04ee2dc-c36a-4a32-92f4-183fb6b85c88", 00:17:46.162 "assigned_rate_limits": { 00:17:46.162 "rw_ios_per_sec": 0, 00:17:46.162 "rw_mbytes_per_sec": 0, 00:17:46.162 "r_mbytes_per_sec": 0, 00:17:46.162 "w_mbytes_per_sec": 0 00:17:46.162 }, 00:17:46.162 "claimed": false, 00:17:46.162 "zoned": false, 00:17:46.162 "supported_io_types": { 00:17:46.162 "read": true, 00:17:46.162 "write": true, 00:17:46.162 "unmap": true, 00:17:46.162 "write_zeroes": true, 00:17:46.162 "flush": false, 00:17:46.162 "reset": true, 00:17:46.162 "compare": false, 00:17:46.162 "compare_and_write": false, 00:17:46.162 "abort": false, 00:17:46.162 "nvme_admin": false, 00:17:46.162 "nvme_io": false 00:17:46.162 }, 00:17:46.162 "driver_specific": { 00:17:46.162 "lvol": { 00:17:46.162 "lvol_store_uuid": "951f18b3-430a-4a47-af00-1d0854a88b88", 00:17:46.162 "base_bdev": "aio_bdev", 
00:17:46.162 "thin_provision": false, 00:17:46.162 "num_allocated_clusters": 38, 00:17:46.162 "snapshot": false, 00:17:46.162 "clone": false, 00:17:46.162 "esnap_clone": false 00:17:46.162 } 00:17:46.162 } 00:17:46.162 } 00:17:46.162 ] 00:17:46.163 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:46.163 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:46.163 06:46:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:46.420 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:46.420 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:46.420 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:46.678 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:46.678 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b04ee2dc-c36a-4a32-92f4-183fb6b85c88 00:17:46.937 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 951f18b3-430a-4a47-af00-1d0854a88b88 00:17:47.195 06:46:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.761 00:17:47.761 real 0m17.481s 00:17:47.761 user 0m16.854s 00:17:47.761 sys 0m1.973s 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.761 ************************************ 00:17:47.761 END TEST lvs_grow_clean 00:17:47.761 ************************************ 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.761 ************************************ 00:17:47.761 START TEST lvs_grow_dirty 00:17:47.761 ************************************ 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.761 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:48.019 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:48.019 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:48.278 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=04c451d5-aa23-44ff-88f1-49415a795db4 00:17:48.278 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:17:48.278 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:48.537 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:48.537 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:48.537 06:46:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04c451d5-aa23-44ff-88f1-49415a795db4 lvol 150 00:17:48.795 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:17:48.795 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:48.795 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:49.054 [2024-07-15 06:46:36.460108] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:49.054 [2024-07-15 06:46:36.460219] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:49.054 true 00:17:49.054 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:17:49.054 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:49.311 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:49.311 06:46:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:49.569 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:17:49.827 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:50.085 [2024-07-15 06:46:37.471240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.085 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=624515 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 624515 /var/tmp/bdevperf.sock 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 624515 ']' 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.344 06:46:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:50.344 [2024-07-15 06:46:37.813373] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:50.344 [2024-07-15 06:46:37.813440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624515 ] 00:17:50.344 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.344 [2024-07-15 06:46:37.878081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.602 [2024-07-15 06:46:37.976733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.602 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.602 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:50.602 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:50.861 Nvme0n1 00:17:50.861 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:51.119 [ 00:17:51.119 { 00:17:51.119 "name": "Nvme0n1", 00:17:51.119 "aliases": [ 00:17:51.119 "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8" 00:17:51.119 ], 00:17:51.119 "product_name": "NVMe disk", 00:17:51.119 "block_size": 4096, 00:17:51.119 "num_blocks": 38912, 00:17:51.119 "uuid": "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8", 00:17:51.119 "assigned_rate_limits": { 00:17:51.119 "rw_ios_per_sec": 0, 00:17:51.119 "rw_mbytes_per_sec": 0, 00:17:51.119 "r_mbytes_per_sec": 0, 00:17:51.119 "w_mbytes_per_sec": 0 00:17:51.119 }, 00:17:51.119 "claimed": false, 00:17:51.119 "zoned": false, 00:17:51.119 "supported_io_types": { 00:17:51.119 "read": true, 00:17:51.119 "write": true, 00:17:51.119 "unmap": true, 00:17:51.119 "write_zeroes": true, 00:17:51.119 "flush": true, 00:17:51.119 "reset": true, 00:17:51.119 "compare": true, 00:17:51.119 "compare_and_write": true, 00:17:51.119 "abort": true, 00:17:51.119 "nvme_admin": true, 00:17:51.119 "nvme_io": true 00:17:51.119 }, 00:17:51.119 "memory_domains": [ 00:17:51.119 { 00:17:51.119 "dma_device_id": "system", 00:17:51.119 "dma_device_type": 1 00:17:51.119 } 00:17:51.119 ], 00:17:51.119 "driver_specific": { 00:17:51.119 "nvme": [ 00:17:51.119 { 00:17:51.119 "trid": { 00:17:51.119 "trtype": "TCP", 00:17:51.119 "adrfam": "IPv4", 00:17:51.119 "traddr": "10.0.0.2", 00:17:51.119 "trsvcid": "4420", 00:17:51.119 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:51.119 }, 00:17:51.119 "ctrlr_data": { 00:17:51.119 "cntlid": 1, 00:17:51.119 "vendor_id": "0x8086", 00:17:51.119 "model_number": "SPDK bdev Controller", 00:17:51.119 "serial_number": "SPDK0", 00:17:51.119 "firmware_revision": "24.05.1", 00:17:51.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:51.119 "oacs": { 00:17:51.119 "security": 0, 00:17:51.119 "format": 0, 00:17:51.119 "firmware": 0, 00:17:51.119 "ns_manage": 0 00:17:51.119 }, 00:17:51.119 "multi_ctrlr": true, 00:17:51.119 "ana_reporting": false 00:17:51.119 }, 00:17:51.119 "vs": { 00:17:51.119 "nvme_version": "1.3" 00:17:51.119 }, 00:17:51.119 "ns_data": { 00:17:51.119 "id": 1, 00:17:51.119 "can_share": true 00:17:51.119 } 00:17:51.119 } 00:17:51.119 ], 00:17:51.119 "mp_policy": "active_passive" 00:17:51.119 } 00:17:51.119 } 00:17:51.119 ] 00:17:51.119 06:46:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=624645
00:17:51.119 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:17:51.119 06:46:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:51.378 Running I/O for 10 seconds...
00:17:52.312 Latency(us)
00:17:52.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:52.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:52.312 Nvme0n1 : 1.00 14416.00 56.31 0.00 0.00 0.00 0.00 0.00
00:17:52.312 ===================================================================================================================
00:17:52.312 Total : 14416.00 56.31 0.00 0.00 0.00 0.00 0.00
00:17:52.312
00:17:53.247 06:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 04c451d5-aa23-44ff-88f1-49415a795db4
00:17:53.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:53.247 Nvme0n1 : 2.00 14577.00 56.94 0.00 0.00 0.00 0.00 0.00
00:17:53.247 ===================================================================================================================
00:17:53.247 Total : 14577.00 56.94 0.00 0.00 0.00 0.00 0.00
00:17:53.247
00:17:53.504 true
00:17:53.504 06:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4
00:17:53.504 06:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:53.761 06:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:53.761 06:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:53.761 06:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 624645
00:17:54.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:54.325 Nvme0n1 : 3.00 14692.00 57.39 0.00 0.00 0.00 0.00 0.00
00:17:54.325 ===================================================================================================================
00:17:54.325 Total : 14692.00 57.39 0.00 0.00 0.00 0.00 0.00
00:17:54.325
00:17:55.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:55.261 Nvme0n1 : 4.00 14860.75 58.05 0.00 0.00 0.00 0.00 0.00
00:17:55.261 ===================================================================================================================
00:17:55.261 Total : 14860.75 58.05 0.00 0.00 0.00 0.00 0.00
00:17:55.261
00:17:56.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:56.197 Nvme0n1 : 5.00 14937.20 58.35 0.00 0.00 0.00 0.00 0.00
00:17:56.197 ===================================================================================================================
00:17:56.197 Total : 14937.20 58.35 0.00 0.00 0.00 0.00 0.00
00:17:56.197
00:17:57.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:57.574 Nvme0n1 : 6.00 15057.33 58.82 0.00 0.00 0.00 0.00 0.00
00:17:57.574 ===================================================================================================================
00:17:57.574 Total : 15057.33 58.82 0.00 0.00 0.00 0.00 0.00
00:17:57.574
00:17:58.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:58.517 Nvme0n1 : 7.00 15084.29 58.92 0.00 0.00 0.00 0.00 0.00
00:17:58.517 ===================================================================================================================
00:17:58.517 Total : 15084.29 58.92 0.00 0.00 0.00 0.00 0.00
00:17:58.517
00:17:59.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:59.458 Nvme0n1 : 8.00 15159.50 59.22 0.00 0.00 0.00 0.00 0.00
00:17:59.458 ===================================================================================================================
00:17:59.458 Total : 15159.50 59.22 0.00 0.00 0.00 0.00 0.00
00:17:59.458
00:18:00.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:00.399 Nvme0n1 : 9.00 15218.00 59.45 0.00 0.00 0.00 0.00 0.00
00:18:00.399 ===================================================================================================================
00:18:00.399 Total : 15218.00 59.45 0.00 0.00 0.00 0.00 0.00
00:18:00.399
00:18:01.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:01.338 Nvme0n1 : 10.00 15220.20 59.45 0.00 0.00 0.00 0.00 0.00
00:18:01.338 ===================================================================================================================
00:18:01.338 Total : 15220.20 59.45 0.00 0.00 0.00 0.00 0.00
00:18:01.338
00:18:01.338
00:18:01.338 Latency(us)
00:18:01.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:01.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:01.338 Nvme0n1 : 10.01 15223.28 59.47 0.00 0.00 8403.22 4903.06 16602.45
00:18:01.338 ===================================================================================================================
00:18:01.338 Total : 15223.28 59.47 0.00 0.00 8403.22 4903.06 16602.45
00:18:01.338 0
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 624515
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 624515 ']'
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 624515
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 624515
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 624515'
00:18:01.338 killing process with pid 624515
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 624515
00:18:01.338 Received shutdown signal, test time was about 10.000000 seconds
00:18:01.338
00:18:01.338 Latency(us)
00:18:01.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:01.338 ===================================================================================================================
00:18:01.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:01.338 06:46:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 624515
00:18:01.597 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:01.855 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:18:02.113 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4
00:18:02.113 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:18:02.372 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:18:02.372 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:18:02.372 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 622027
00:18:02.372 06:46:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 622027
00:18:02.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 622027 Killed "${NVMF_APP[@]}" "$@"
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=625979
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 625979
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 625979 ']'
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:02.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
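At this point the dirty variant has already grown the lvstore to 99 data clusters under I/O, confirmed the count, and then killed the nvmf target with SIGKILL so the blobstore metadata on the AIO file is left dirty; the target is now restarting so that reloading the bdev forces blobstore recovery. A condensed sketch of that sequence, assuming SPDK's stock scripts/rpc.py; $rpc, $lvs, $nvmfpid and $SPDK below are illustrative shorthands, not variables from this trace:

    rpc=$SPDK/scripts/rpc.py
    # Grow the lvstore while bdevperf I/O is running, then crash the target
    # hard so the on-disk blobstore metadata is never cleanly shut down.
    $rpc bdev_lvol_grow_lvstore -u "$lvs"        # total_data_clusters: 49 -> 99
    kill -9 "$nvmfpid"

    # Restart the target in the same netns and reload the same AIO file;
    # blobstore recovery should replay the metadata and keep the grown size.
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    $rpc bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99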
00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:02.630 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 [2024-07-15 06:46:50.072404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:02.630 [2024-07-15 06:46:50.072504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.630 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.630 [2024-07-15 06:46:50.149044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.630 [2024-07-15 06:46:50.237603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.630 [2024-07-15 06:46:50.237667] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.630 [2024-07-15 06:46:50.237699] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.630 [2024-07-15 06:46:50.237714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.630 [2024-07-15 06:46:50.237725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.630 [2024-07-15 06:46:50.237763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.888 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.145 [2024-07-15 06:46:50.602357] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:03.145 [2024-07-15 06:46:50.602485] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:03.145 [2024-07-15 06:46:50.602530] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:03.145 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.402 06:46:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 -t 2000 00:18:03.661 [ 00:18:03.661 { 00:18:03.661 "name": "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8", 00:18:03.661 "aliases": [ 00:18:03.661 "lvs/lvol" 00:18:03.661 ], 00:18:03.661 "product_name": "Logical Volume", 00:18:03.661 "block_size": 4096, 00:18:03.661 "num_blocks": 38912, 00:18:03.661 "uuid": "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8", 00:18:03.661 "assigned_rate_limits": { 00:18:03.661 "rw_ios_per_sec": 0, 00:18:03.661 "rw_mbytes_per_sec": 0, 00:18:03.661 "r_mbytes_per_sec": 0, 00:18:03.661 "w_mbytes_per_sec": 0 00:18:03.661 }, 00:18:03.661 "claimed": false, 00:18:03.661 "zoned": false, 00:18:03.661 "supported_io_types": { 00:18:03.661 "read": true, 00:18:03.661 "write": true, 00:18:03.661 "unmap": true, 00:18:03.661 "write_zeroes": true, 00:18:03.661 "flush": false, 00:18:03.661 "reset": true, 00:18:03.661 "compare": false, 00:18:03.661 "compare_and_write": false, 00:18:03.661 "abort": false, 00:18:03.661 "nvme_admin": false, 00:18:03.661 "nvme_io": false 00:18:03.661 }, 00:18:03.661 "driver_specific": { 00:18:03.661 "lvol": { 00:18:03.661 "lvol_store_uuid": "04c451d5-aa23-44ff-88f1-49415a795db4", 00:18:03.661 "base_bdev": "aio_bdev", 00:18:03.661 "thin_provision": false, 00:18:03.661 "num_allocated_clusters": 38, 00:18:03.661 "snapshot": false, 00:18:03.661 "clone": false, 00:18:03.661 "esnap_clone": false 00:18:03.661 } 00:18:03.661 } 00:18:03.661 } 00:18:03.661 ] 00:18:03.661 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:03.661 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:03.661 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:03.920 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:03.920 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:03.920 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:04.180 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:04.180 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:04.438 [2024-07-15 06:46:51.871573] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
04c451d5-aa23-44ff-88f1-49415a795db4 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:04.439 06:46:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:04.721 request: 00:18:04.721 { 00:18:04.721 "uuid": "04c451d5-aa23-44ff-88f1-49415a795db4", 00:18:04.721 "method": "bdev_lvol_get_lvstores", 00:18:04.721 "req_id": 1 00:18:04.721 } 00:18:04.721 Got JSON-RPC error response 00:18:04.721 response: 00:18:04.721 { 00:18:04.721 "code": -19, 00:18:04.721 "message": "No such device" 00:18:04.721 } 00:18:04.721 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:04.721 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.721 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.721 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.721 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:04.979 aio_bdev 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:04.979 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:05.238 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 -t 2000 00:18:05.497 [ 00:18:05.497 { 00:18:05.497 "name": "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8", 00:18:05.497 "aliases": [ 00:18:05.497 "lvs/lvol" 00:18:05.497 ], 00:18:05.497 "product_name": "Logical Volume", 00:18:05.497 "block_size": 4096, 00:18:05.497 "num_blocks": 38912, 00:18:05.497 "uuid": "1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8", 00:18:05.497 "assigned_rate_limits": { 00:18:05.497 "rw_ios_per_sec": 0, 00:18:05.497 "rw_mbytes_per_sec": 0, 00:18:05.497 "r_mbytes_per_sec": 0, 00:18:05.497 "w_mbytes_per_sec": 0 00:18:05.497 }, 00:18:05.497 "claimed": false, 00:18:05.497 "zoned": false, 00:18:05.497 "supported_io_types": { 00:18:05.497 "read": true, 00:18:05.497 "write": true, 00:18:05.497 "unmap": true, 00:18:05.497 "write_zeroes": true, 00:18:05.497 "flush": false, 00:18:05.497 "reset": true, 00:18:05.497 "compare": false, 00:18:05.497 "compare_and_write": false, 00:18:05.497 "abort": false, 00:18:05.497 "nvme_admin": false, 00:18:05.497 "nvme_io": false 00:18:05.497 }, 00:18:05.497 "driver_specific": { 00:18:05.497 "lvol": { 00:18:05.497 "lvol_store_uuid": "04c451d5-aa23-44ff-88f1-49415a795db4", 00:18:05.497 "base_bdev": "aio_bdev", 00:18:05.497 "thin_provision": false, 00:18:05.497 "num_allocated_clusters": 38, 00:18:05.497 "snapshot": false, 00:18:05.497 "clone": false, 00:18:05.497 "esnap_clone": false 00:18:05.497 } 00:18:05.497 } 00:18:05.497 } 00:18:05.497 ] 00:18:05.497 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:05.497 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:05.497 06:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:05.756 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:05.756 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:05.756 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:06.015 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:06.015 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a5699d6-bc99-4d20-8e9b-fc2470d7e0d8 00:18:06.273 06:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04c451d5-aa23-44ff-88f1-49415a795db4 00:18:06.530 06:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:06.788 00:18:06.788 real 0m19.209s 00:18:06.788 user 0m48.528s 00:18:06.788 sys 0m4.566s 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.788 ************************************ 00:18:06.788 END TEST lvs_grow_dirty 00:18:06.788 ************************************ 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:06.788 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:06.788 nvmf_trace.0 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.048 rmmod nvme_tcp 00:18:07.048 rmmod nvme_fabrics 00:18:07.048 rmmod nvme_keyring 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 625979 ']' 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 625979 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 625979 ']' 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 625979 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 625979 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 625979' 00:18:07.048 killing process with pid 625979 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 625979 00:18:07.048 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 625979 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.308 06:46:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.215 06:46:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.215 00:18:09.215 real 0m41.971s 00:18:09.215 user 1m11.164s 00:18:09.215 sys 0m8.379s 00:18:09.215 06:46:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:09.215 06:46:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:09.215 ************************************ 00:18:09.215 END TEST nvmf_lvs_grow 00:18:09.215 ************************************ 00:18:09.215 06:46:56 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:09.215 06:46:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:09.215 06:46:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.215 06:46:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.474 ************************************ 00:18:09.474 START TEST nvmf_bdev_io_wait 00:18:09.474 ************************************ 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:09.474 * Looking for test storage... 
00:18:09.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.474 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.475 06:46:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.373 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:18:11.374 00:18:11.374 --- 10.0.0.2 ping statistics --- 00:18:11.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.374 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:18:11.374 00:18:11.374 --- 10.0.0.1 ping statistics --- 00:18:11.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.374 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.374 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.633 06:46:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=628504 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 628504 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 628504 ']' 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.633 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 [2024-07-15 06:46:59.063805] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:11.633 [2024-07-15 06:46:59.063904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.633 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.633 [2024-07-15 06:46:59.139076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.633 [2024-07-15 06:46:59.228441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.633 [2024-07-15 06:46:59.228496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.633 [2024-07-15 06:46:59.228509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.633 [2024-07-15 06:46:59.228519] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.633 [2024-07-15 06:46:59.228528] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.633 [2024-07-15 06:46:59.228607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.633 [2024-07-15 06:46:59.231895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.633 [2024-07-15 06:46:59.231969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.633 [2024-07-15 06:46:59.231974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.892 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 [2024-07-15 06:46:59.392574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 Malloc0 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 [2024-07-15 06:46:59.456338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=628565 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=628568 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.893 { 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme$subsystem", 00:18:11.893 "trtype": "$TEST_TRANSPORT", 00:18:11.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "$NVMF_PORT", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.893 "hdgst": ${hdgst:-false}, 00:18:11.893 "ddgst": ${ddgst:-false} 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 } 00:18:11.893 EOF 00:18:11.893 )") 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=628571 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=628573 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.893 { 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme$subsystem", 00:18:11.893 "trtype": "$TEST_TRANSPORT", 00:18:11.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "$NVMF_PORT", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.893 "hdgst": ${hdgst:-false}, 00:18:11.893 "ddgst": ${ddgst:-false} 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 } 00:18:11.893 EOF 00:18:11.893 )") 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.893 { 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme$subsystem", 00:18:11.893 "trtype": "$TEST_TRANSPORT", 00:18:11.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "$NVMF_PORT", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.893 "hdgst": ${hdgst:-false}, 00:18:11.893 "ddgst": ${ddgst:-false} 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 } 00:18:11.893 EOF 00:18:11.893 )") 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:11.893 { 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme$subsystem", 00:18:11.893 "trtype": "$TEST_TRANSPORT", 00:18:11.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "$NVMF_PORT", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.893 "hdgst": ${hdgst:-false}, 00:18:11.893 "ddgst": ${ddgst:-false} 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 } 00:18:11.893 EOF 00:18:11.893 )") 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 628565 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme1", 00:18:11.893 "trtype": "tcp", 00:18:11.893 "traddr": "10.0.0.2", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "4420", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.893 "hdgst": false, 00:18:11.893 "ddgst": false 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 }' 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme1", 00:18:11.893 "trtype": "tcp", 00:18:11.893 "traddr": "10.0.0.2", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "4420", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.893 "hdgst": false, 00:18:11.893 "ddgst": false 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 }' 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme1", 00:18:11.893 "trtype": "tcp", 00:18:11.893 "traddr": "10.0.0.2", 00:18:11.893 "adrfam": "ipv4", 00:18:11.893 "trsvcid": "4420", 00:18:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.893 "hdgst": false, 00:18:11.893 "ddgst": false 00:18:11.893 }, 00:18:11.893 "method": "bdev_nvme_attach_controller" 00:18:11.893 }' 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.893 06:46:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.893 "params": { 00:18:11.893 "name": "Nvme1", 00:18:11.893 "trtype": "tcp", 00:18:11.893 "traddr": "10.0.0.2", 00:18:11.893 "adrfam": "ipv4", 00:18:11.894 "trsvcid": "4420", 00:18:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.894 "hdgst": false, 00:18:11.894 "ddgst": false 00:18:11.894 }, 00:18:11.894 "method": "bdev_nvme_attach_controller" 
00:18:11.894 }' 00:18:11.894 [2024-07-15 06:46:59.503175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:11.894 [2024-07-15 06:46:59.503209] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:11.894 [2024-07-15 06:46:59.503267] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:11.894 [2024-07-15 06:46:59.503260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:11.894 [2024-07-15 06:46:59.503260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:11.894 [2024-07-15 06:46:59.503287] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:11.894 [2024-07-15 06:46:59.503337] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:11.894 [2024-07-15 06:46:59.503337] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:12.152 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.152 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.152 [2024-07-15 06:46:59.675435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.152 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.152 [2024-07-15 06:46:59.749899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:12.410 [2024-07-15 06:46:59.775470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.410 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.410 [2024-07-15 06:46:59.853871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.410 [2024-07-15 06:46:59.854947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:12.410 [2024-07-15 06:46:59.919605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:12.410 [2024-07-15 06:46:59.927640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.410 [2024-07-15 06:46:59.997109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:12.667 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds...
00:18:13.603 00:18:13.603 Latency(us) 00:18:13.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.603 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:13.603 Nvme1n1 : 1.01 11908.54 46.52 0.00 0.00 10709.55 6213.78 21456.97 00:18:13.603 =================================================================================================================== 00:18:13.603 Total : 11908.54 46.52 0.00 0.00 10709.55 6213.78 21456.97 00:18:13.603 00:18:13.603 Latency(us) 00:18:13.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.603 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:13.603 Nvme1n1 : 1.02 4403.32 17.20 0.00 0.00 28651.39 13592.65 43690.67 00:18:13.603 =================================================================================================================== 00:18:13.603 Total : 4403.32 17.20 0.00 0.00 28651.39 13592.65 43690.67 00:18:13.891 00:18:13.891 Latency(us) 00:18:13.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.891 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:13.891 Nvme1n1 : 1.00 200018.79 781.32 0.00 0.00 637.65 267.00 867.75 00:18:13.891 =================================================================================================================== 00:18:13.891 Total : 200018.79 781.32 0.00 0.00 637.65 267.00 867.75 00:18:13.891 00:18:13.891 Latency(us) 00:18:13.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.891 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:13.891 Nvme1n1 : 1.01 4851.71 18.95 0.00 0.00 26282.95 6456.51 59419.31 00:18:13.891 =================================================================================================================== 00:18:13.891 Total : 4851.71 18.95 0.00 0.00 26282.95 6456.51 59419.31 00:18:13.891 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 628568 00:18:13.891 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 628571 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 628573 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.153 rmmod nvme_tcp 00:18:14.153 rmmod nvme_fabrics 00:18:14.153 rmmod nvme_keyring 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 628504 ']' 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 628504 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 628504 ']' 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 628504 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 628504 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 628504' 00:18:14.153 killing process with pid 628504 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 628504 00:18:14.153 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 628504 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.411 06:47:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.316 06:47:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.316 00:18:16.316 real 0m7.026s 00:18:16.316 user 0m16.229s 00:18:16.316 sys 0m3.368s 00:18:16.316 06:47:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:16.316 06:47:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.316 ************************************ 00:18:16.316 END TEST nvmf_bdev_io_wait 00:18:16.316 ************************************ 00:18:16.316 06:47:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:16.316 06:47:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:16.316 06:47:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:16.316 06:47:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.316 ************************************ 00:18:16.316 START TEST nvmf_queue_depth 00:18:16.316 ************************************ 00:18:16.316 06:47:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:16.576 * Looking for test storage... 00:18:16.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:16.576 06:47:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.576 06:47:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.577 06:47:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.577 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.577 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.577 06:47:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.577 06:47:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.477 
06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.477 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.478 06:47:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms
00:18:18.478
00:18:18.478 --- 10.0.0.2 ping statistics ---
00:18:18.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:18.478 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:18.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:18.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:18:18.478
00:18:18.478 --- 10.0.0.1 ping statistics ---
00:18:18.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:18.478 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
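(For orientation before the target starts: stripped of the xtrace noise, the namespace plumbing and reachability checks traced above reduce to the sequence below. This is a reader's consolidation of the exact commands in this log, not a separate script; the cvl_0_* interface names and the 10.0.0.0/24 addresses are specific to this rig.

    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator

Both pings answering in well under a millisecond, as above, is the precondition for starting nvmf_tgt inside the namespace.)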
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=630751
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 630751
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 630751 ']'
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:18.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:18.478 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.737 [2024-07-15 06:47:06.100549] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:18.737 [2024-07-15 06:47:06.100630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:18.737 EAL: No free 2048 kB hugepages reported on node 1
00:18:18.737 [2024-07-15 06:47:06.164405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:18.737 [2024-07-15 06:47:06.251659] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:18.737 [2024-07-15 06:47:06.251714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:18.737 [2024-07-15 06:47:06.251728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:18.737 [2024-07-15 06:47:06.251738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:18.737 [2024-07-15 06:47:06.251747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:18.737 [2024-07-15 06:47:06.251773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 [2024-07-15 06:47:06.385904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 Malloc0
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.996 [2024-07-15 06:47:06.451714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=630890
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 630890 /var/tmp/bdevperf.sock
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 630890 ']'
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:18.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:18.996 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
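(Stripped of the xtrace noise, the provisioning traced above is five RPCs against the freshly started target, followed by launching bdevperf against its own RPC socket. A reader's consolidation of the exact commands in this log; rpc.py here stands in for the rpc_cmd wrapper around spdk/scripts/rpc.py, and the inline notes summarize the trace rather than the full option documentation.

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as in queue_depth.sh@23
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf in -z mode waits for RPC-driven setup; -q 1024 outstanding I/Os,
    # -o 4096-byte I/O, -w verify workload, -t 10 seconds; the script backgrounds
    # it and records its pid (630890 in this run).
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The trace that follows attaches the NVMe-oF controller inside bdevperf (bdev_nvme_attach_controller -b NVMe0 ... -n nqn.2016-06.io.spdk:cnode1) and kicks the run off with bdevperf.py perform_tests.)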
00:18:18.996 [2024-07-15 06:47:06.493288] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:18.996 [2024-07-15 06:47:06.493350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630890 ]
00:18:18.996 EAL: No free 2048 kB hugepages reported on node 1
00:18:19.254 [2024-07-15 06:47:06.555061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:19.254 [2024-07-15 06:47:06.645624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:19.254 NVMe0n1
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:19.254 06:47:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:19.513 Running I/O for 10 seconds...
00:18:29.498
00:18:29.498 Latency(us)
00:18:29.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:29.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:29.498 Verification LBA range: start 0x0 length 0x4000
00:18:29.498 NVMe0n1 : 10.09 8600.87 33.60 0.00 0.00 118526.79 24855.13 74953.77
00:18:29.498 ===================================================================================================================
00:18:29.498 Total : 8600.87 33.60 0.00 0.00 118526.79 24855.13 74953.77
00:18:29.498 0
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 630890
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 630890 ']'
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 630890
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 630890
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 630890'
00:18:29.498 killing process with pid 630890
00:18:29.498 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 630890
00:18:29.498 Received shutdown signal, test time was about 10.000000 seconds
00:18:29.498
00:18:29.498 Latency(us)
00:18:29.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:29.498 ===================================================================================================================
00:18:29.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 630890
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:29.756 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:29.756 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 630751 ']'
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 630751
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 630751 ']'
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 630751
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 630751
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 630751'
00:18:30.014 killing process with pid 630751
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 630751
00:18:30.014 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 630751
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:30.274 06:47:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:32.179 06:47:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:32.179
00:18:32.179 real 0m15.789s
00:18:32.179 user 0m22.294s
00:18:32.179 sys 0m2.909s
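(A reader's cross-check on the result above, not part of the run: at steady state Little's law gives outstanding I/O = IOPS x mean latency, which should sit near the configured queue depth. With 8600.87 IOPS and an average latency of 118526.79 us:

    echo '8600.87 * 118526.79 / 1000000' | bc -l    # -> ~1019.4

i.e. very close to the 1024 I/Os bdevperf was asked to keep in flight (-q 1024); likewise 8600.87 / 256 = 33.60 matches the MiB/s column for 4 KiB I/O. The all-zero second table is just the shutdown-time summary printed after the timed run had already completed.)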
00:18:32.179 06:47:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:32.179 06:47:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:32.179 ************************************ 00:18:32.179 END TEST nvmf_queue_depth 00:18:32.179 ************************************ 00:18:32.179 06:47:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:32.179 06:47:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:32.179 06:47:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:32.179 06:47:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:32.179 ************************************ 00:18:32.179 START TEST nvmf_target_multipath 00:18:32.179 ************************************ 00:18:32.179 06:47:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:32.437 * Looking for test storage... 00:18:32.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.437 06:47:19 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.437 06:47:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:34.347 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:34.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:34.347 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.347 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:34.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:34.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:18:34.348 00:18:34.348 --- 10.0.0.2 ping statistics --- 00:18:34.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.348 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms
00:18:34.348
00:18:34.348 --- 10.0.0.1 ping statistics ---
00:18:34.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:34.348 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:18:34.348 only one NIC for nvmf test
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:34.348 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:34.348 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:34.608 06:47:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
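(What happened here: the multipath test needs a second NIC, and on this rig no second target IP was configured, so the script bails out early and still reports success. A reader's reconstruction of the guard traced at multipath.sh@45-48; xtrace elides the variable being tested, so the NVMF_SECOND_TARGET_IP name below is an assumption based on nvmf/common.sh, not confirmed by the log.

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # assumed variable; it expands empty above
            echo 'only one NIC for nvmf test'
            nvmftestfini    # tear down the namespace and unload the nvme modules
            exit 0          # exit success so the suite continues with the next test
    fi

The END TEST banner below therefore shows the test passing after roughly four seconds of setup and teardown.)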
cvl_0_1 00:18:36.511 06:47:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:36.511 06:47:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:36.511 06:47:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.511 06:47:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.511 00:18:36.511 real 0m4.258s 00:18:36.511 user 0m0.795s 00:18:36.511 sys 0m1.461s 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:36.511 06:47:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:36.511 ************************************ 00:18:36.511 END TEST nvmf_target_multipath 00:18:36.511 ************************************ 00:18:36.512 06:47:24 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:36.512 06:47:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:36.512 06:47:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:36.512 06:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.512 ************************************ 00:18:36.512 START TEST nvmf_zcopy 00:18:36.512 ************************************ 00:18:36.512 06:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:36.512 * Looking for test storage... 
00:18:36.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.512 06:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.512 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
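The hostnqn/hostid pair minted above by `nvme gen-hostnqn` identifies the initiator side of every connection this suite makes. This particular test drives I/O with SPDK's userspace bdevperf rather than the kernel initiator, but as an illustration only (nvme-cli assumed; the subsystem NQN and address are the ones this run configures further down), the same identity would be presented like so:

    # Illustrative kernel-initiator connect using this run's generated identity.
    # nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 is created later in this log.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55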
00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.772 06:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.706 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:38.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.707 
06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:38.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:38.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:38.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:38.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:18:38.707 00:18:38.707 --- 10.0.0.2 ping statistics --- 00:18:38.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.707 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:38.707 00:18:38.707 --- 10.0.0.1 ping statistics --- 00:18:38.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.707 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=635934 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 635934 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 635934 ']' 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:38.707 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.707 [2024-07-15 06:47:26.303919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:38.707 [2024-07-15 06:47:26.304012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.965 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.965 [2024-07-15 06:47:26.369557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.965 [2024-07-15 06:47:26.452417] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.965 [2024-07-15 06:47:26.452486] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
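Before the target application was started above, nvmf_tcp_init split the two E810 ports between network namespaces so that NVMe/TCP traffic genuinely crosses the link rather than looping back in software. Consolidated from the trace (every command below appears verbatim in this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                            # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator reachability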
00:18:38.965 [2024-07-15 06:47:26.452515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.965 [2024-07-15 06:47:26.452526] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.965 [2024-07-15 06:47:26.452535] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.965 [2024-07-15 06:47:26.452561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:38.965 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.966 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.223 [2024-07-15 06:47:26.585000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.223 [2024-07-15 06:47:26.601209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.223 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.224 malloc0 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.224 
06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:39.224 { 00:18:39.224 "params": { 00:18:39.224 "name": "Nvme$subsystem", 00:18:39.224 "trtype": "$TEST_TRANSPORT", 00:18:39.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.224 "adrfam": "ipv4", 00:18:39.224 "trsvcid": "$NVMF_PORT", 00:18:39.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.224 "hdgst": ${hdgst:-false}, 00:18:39.224 "ddgst": ${ddgst:-false} 00:18:39.224 }, 00:18:39.224 "method": "bdev_nvme_attach_controller" 00:18:39.224 } 00:18:39.224 EOF 00:18:39.224 )") 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:39.224 06:47:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:39.224 "params": { 00:18:39.224 "name": "Nvme1", 00:18:39.224 "trtype": "tcp", 00:18:39.224 "traddr": "10.0.0.2", 00:18:39.224 "adrfam": "ipv4", 00:18:39.224 "trsvcid": "4420", 00:18:39.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.224 "hdgst": false, 00:18:39.224 "ddgst": false 00:18:39.224 }, 00:18:39.224 "method": "bdev_nvme_attach_controller" 00:18:39.224 }' 00:18:39.224 [2024-07-15 06:47:26.675668] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:39.224 [2024-07-15 06:47:26.675743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635965 ] 00:18:39.224 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.224 [2024-07-15 06:47:26.737788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.224 [2024-07-15 06:47:26.832463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.793 Running I/O for 10 seconds... 
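While the 10-second verify workload runs, note that the target-side state it exercises was built entirely over RPC above. rpc_cmd is the autotest shorthand for SPDK's scripts/rpc.py client, so the setup is reproducible as the following sequence (all flags copied from the trace; -o comes from NVMF_TRANSPORT_OPTS for tcp):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # zero-copy TCP, no in-capsule data
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                             # allow any host, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that namespace through the bdev_nvme_attach_controller JSON generated above and fed in on /dev/fd/62.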
00:18:49.776
00:18:49.776 Latency(us)
00:18:49.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:49.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:49.776 Verification LBA range: start 0x0 length 0x1000
00:18:49.776 Nvme1n1 : 10.02 5733.09 44.79 0.00 0.00 22250.89 1498.83 33399.09
00:18:49.776 ===================================================================================================================
00:18:49.776 Total : 5733.09 44.79 0.00 0.00 22250.89 1498.83 33399.09
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=637263
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:50.035 {
00:18:50.035 "params": {
00:18:50.035 "name": "Nvme$subsystem",
00:18:50.035 "trtype": "$TEST_TRANSPORT",
00:18:50.035 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:50.035 "adrfam": "ipv4",
00:18:50.035 "trsvcid": "$NVMF_PORT",
00:18:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:50.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:50.035 "hdgst": ${hdgst:-false},
00:18:50.035 "ddgst": ${ddgst:-false}
00:18:50.035 },
00:18:50.035 "method": "bdev_nvme_attach_controller"
00:18:50.035 }
00:18:50.035 EOF
00:18:50.035 )")
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:50.035 [2024-07-15 06:47:37.408127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 06:47:37.408173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:50.035 06:47:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:50.035 "params": { 00:18:50.035 "name": "Nvme1", 00:18:50.036 "trtype": "tcp", 00:18:50.036 "traddr": "10.0.0.2", 00:18:50.036 "adrfam": "ipv4", 00:18:50.036 "trsvcid": "4420", 00:18:50.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.036 "hdgst": false, 00:18:50.036 "ddgst": false 00:18:50.036 }, 00:18:50.036 "method": "bdev_nvme_attach_controller" 00:18:50.036 }' 00:18:50.036 [2024-07-15 06:47:37.416095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.416122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.424112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.424136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.432126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.432150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.440152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.440188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.446329] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:50.036 [2024-07-15 06:47:37.446401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637263 ] 00:18:50.036 [2024-07-15 06:47:37.448194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.448216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.456210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.456230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.464228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.464248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.472251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.472270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.036 [2024-07-15 06:47:37.480270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.480289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.488304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.488337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.496319] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.496345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.504341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.504365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.512361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.512385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.513274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.036 [2024-07-15 06:47:37.520401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.520432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.528433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.528472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.536427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.536452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.544449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.544473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.552469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.552494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.560492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.560517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.568537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.568572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.576553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.576584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.584558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.584583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.592579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.592603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.600602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.600627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.608609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.608629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.608834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.036 [2024-07-15 06:47:37.616646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.616671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.624681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.624713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.632710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.632761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.640734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.640769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.036 [2024-07-15 06:47:37.648764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.036 [2024-07-15 06:47:37.648803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.656804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.656846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.664813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.664854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.672840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.672894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.680834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.680861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.688895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.688948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.696915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.696979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.704953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.705002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.712915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.712937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.720949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:50.296 [2024-07-15 06:47:37.720972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.728980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.729002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.737010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.737036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.745028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.745052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.753046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.753070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.761072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.761111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.769092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.769116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.777141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.777166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.785133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.785182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.793174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.296 [2024-07-15 06:47:37.793199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.296 [2024-07-15 06:47:37.801207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.801229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 Running I/O for 5 seconds... 
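From here to the end of the capture, the log is dominated by pairs of spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused errors arriving every few milliseconds. That is the test re-issuing nvmf_subsystem_add_ns for NSID 1 while the 5-second randrw zero-copy workload is in flight: each RPC pauses the subsystem (hence the nvmf_rpc_ns_paused callback), is rejected because the namespace already exists, and resumes it. A sketch of the driving loop, illustrative shape only (the real loop lives in test/nvmf/target/zcopy.sh and its iteration count is not shown in this log):

    # Hammer the add-namespace error path while bdevperf (pid 637263) keeps
    # zero-copy I/O in flight against the same namespace; every attempt is
    # expected to fail with "Requested NSID 1 already in use".
    for _ in $(seq 1 100); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done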
00:18:50.297 [2024-07-15 06:47:37.809202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.809227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.822888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.822917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.833069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.833097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.843818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.843846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.855949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.855977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.865073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.865101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.877826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.877854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.887843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.887870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.297 [2024-07-15 06:47:37.898468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.297 [2024-07-15 06:47:37.898495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.911127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.911155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.920650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.920677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.931687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.931715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.945015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.945043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.956508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.956536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.965061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 
[2024-07-15 06:47:37.965088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.977868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.977903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.988086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.988114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:37.998127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:37.998154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.012218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.012246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.021840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.021867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.032790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.032818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.042956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.042984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.053252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.053279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.063539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.063566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.556 [2024-07-15 06:47:38.073908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.556 [2024-07-15 06:47:38.073943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.557 [2024-07-15 06:47:38.084077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.557 [2024-07-15 06:47:38.084104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.557 [2024-07-15 06:47:38.094468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.557 [2024-07-15 06:47:38.094495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.557 [2024-07-15 06:47:38.105077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.557 [2024-07-15 06:47:38.105105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.557 [2024-07-15 06:47:38.115513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.557 [2024-07-15 06:47:38.115541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.557 [2024-07-15 06:47:38.126007] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.557 [2024-07-15 06:47:38.126035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats with only the timestamps changing, at roughly 10-13 ms intervals, from [2024-07-15 06:47:38.136198] through [2024-07-15 06:47:41.526560] (elapsed 00:18:50.557 to 00:18:53.937); duplicate entries collapsed ...]
00:18:53.937 [2024-07-15 06:47:41.526588]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.937 [2024-07-15 06:47:41.537011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.937 [2024-07-15 06:47:41.537039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.937 [2024-07-15 06:47:41.547716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.937 [2024-07-15 06:47:41.547743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.557916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.557944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.568713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.568742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.579721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.579749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.592072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.592100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.601787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.601815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.612319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.612347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.622782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.622810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.196 [2024-07-15 06:47:41.633285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.196 [2024-07-15 06:47:41.633313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.644474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.644503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.657271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.657299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.667117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.667145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.678380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.678412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.689871] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.689927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.701815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.701846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.713141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.713185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.724106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.724134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.735977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.736005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.748643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.748675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.760646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.760677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.771770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.771801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.783081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.783109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.794494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.794524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.197 [2024-07-15 06:47:41.805712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.197 [2024-07-15 06:47:41.805743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.817035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.817063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.828268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.828298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.839347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.839378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.850756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.850786] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.862149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.862191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.875754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.875784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.886717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.886747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.898231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.898261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.909282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.909312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.920857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.920898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.932480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.932509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.943867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.943920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.955298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.955328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.966789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.966819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.978152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.978179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:41.989782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:41.989821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.001535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.001566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.012490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.012521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.023869] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.023907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.035367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.035398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.047190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.047221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.059135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.059163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.457 [2024-07-15 06:47:42.070644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.457 [2024-07-15 06:47:42.070675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.082002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.082030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.093674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.093704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.105165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.105210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.118362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.118393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.128870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.128909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.140748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.140778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.152026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.152054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.163323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.163354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.717 [2024-07-15 06:47:42.174662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.717 [2024-07-15 06:47:42.174693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.186134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.186177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.197470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.197500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.208831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.208871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.220274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.220305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.231542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.231572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.242966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.242993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.255020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.255048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.266189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.266219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.277426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.277456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.288424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.288454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.303584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.303617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.314220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.314250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.718 [2024-07-15 06:47:42.325210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.718 [2024-07-15 06:47:42.325241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.337003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.337031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.348783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.348813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.360257] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.360288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.371395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.371425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.382316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.382347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.393361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.393391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.404531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.404561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.415979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.416006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.427018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.978 [2024-07-15 06:47:42.427056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.978 [2024-07-15 06:47:42.439751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.439781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.450251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.450282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.462215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.462246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.473414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.473445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.485101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.485129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.496487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.496518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.508037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.508066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.519220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.519247] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.530705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.530736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.541867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.541922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.553111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.553139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.564479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.564509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.577378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.577409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.979 [2024-07-15 06:47:42.587936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.979 [2024-07-15 06:47:42.587963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.599332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.599363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.610668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.610698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.621890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.621934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.634928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.634956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.645836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.645883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.657130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.657158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.668431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.668461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.679997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.680025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.691116] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.691144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.703259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.703287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.712469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.712497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.723758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.723786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.734626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.734654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.745070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.745097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.757802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.757830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.769505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.769534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.778408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.778437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.789833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.789862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.801598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.801627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.811185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.811213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 [2024-07-15 06:47:42.821165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.238 [2024-07-15 06:47:42.821192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.238 00:18:55.238 Latency(us) 00:18:55.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.238 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:55.238 Nvme1n1 : 5.01 11532.92 90.10 0.00 0.00 11084.42 4538.97 25631.86 00:18:55.238 =================================================================================================================== 
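A quick consistency check on the summary above (arithmetic added for the reader, not harness output): the job uses 8192-byte I/Os, so the MiB/s column is just the IOPS column divided by 128 (2^20 / 8192):

  11532.92 IOPS x 8192 B  ~= 94.48 MB/s
  11532.92 / 128          ~= 90.10 MiB/s

which matches the reported 90.10 MiB/s over the 5.01 s runtime. The Fail/s and TO/s columns refer to the data-path job itself, not to the intentionally failing add-namespace RPCs condensed above.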
00:18:55.238 [2024-07-15 06:47:42.826468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:55.238 [2024-07-15 06:47:42.826493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at roughly 8 ms intervals while the data-path job winds down ...]
00:18:55.499 [2024-07-15 06:47:43.043058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:55.499 [2024-07-15 06:47:43.043081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
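These error pairs are expected negative-path output: the zcopy test keeps issuing the nvmf_subsystem_add_ns RPC while NSID 1 of nqn.2016-06.io.spdk:cnode1 is still occupied, and the target correctly rejects every attempt (the PID 637263 killed and waited on below appears to be that background retry loop). A minimal standalone reproduction might look like the sketch below; the bdev name and geometry are taken from the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults visible later in this log, and the rpc.py path from this workspace, so treat it as illustrative rather than the script's exact code:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create a 64 MiB malloc bdev with 512 B blocks and attach it as NSID 1.
  $RPC bdev_malloc_create -b malloc0 64 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # A second add with the same NSID is rejected, producing exactly the
  # "Requested NSID 1 already in use" / "Unable to add namespace" pair above.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1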
00:18:55.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (637263) - No such process
00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy --
target/zcopy.sh@49 -- # wait 637263 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 delay0 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.499 06:47:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:55.499 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.757 [2024-07-15 06:47:43.162885] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:02.403 Initializing NVMe Controllers 00:19:02.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:02.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:02.403 Initialization complete. Launching workers. 
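Before the abort statistics print, the trace above swaps the namespace for a deliberately slow device: NSID 1 is removed, malloc0 is wrapped in a delay bdev with 1000000 average and p99 latency arguments for both reads and writes (microseconds, as far as I can tell from the SPDK RPC conventions), and the delay bdev is re-attached as NSID 1, presumably so that I/Os stay in flight long enough for the abort example to have something to abort. An equivalent standalone sequence, with paths and arguments copied from the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Swap the real namespace for a ~1 s delay bdev layered on malloc0.
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive a 5 s random read/write abort workload over NVMe/TCP at queue depth 64.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
      -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters in the summary just below are internally consistent: of the 463 submitted aborts, 262 succeeded and 201 were unsuccessful (262 + 201 + 0 failed = 463), with a further 33 that could not be submitted at all.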
00:19:02.403 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 176 00:19:02.403 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 463, failed to submit 33 00:19:02.403 success 262, unsuccess 201, failed 0 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:02.403 rmmod nvme_tcp 00:19:02.403 rmmod nvme_fabrics 00:19:02.403 rmmod nvme_keyring 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 635934 ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 635934 ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 635934' 00:19:02.403 killing process with pid 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 635934 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.403 06:47:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.311 06:47:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.311 00:19:04.311 real 0m27.561s 00:19:04.311 user 0m40.896s 00:19:04.311 sys 0m8.090s 00:19:04.311 06:47:51 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:19:04.311 06:47:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:04.311 ************************************ 00:19:04.311 END TEST nvmf_zcopy 00:19:04.311 ************************************ 00:19:04.311 06:47:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:04.311 06:47:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:04.311 06:47:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:04.311 06:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.311 ************************************ 00:19:04.311 START TEST nvmf_nmic 00:19:04.311 ************************************ 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:04.311 * Looking for test storage... 00:19:04.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.311 06:47:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain entries and system directories as above] 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain entries and system directories as above] 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the exported PATH, as assembled above] 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.312
06:47:51 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.312 06:47:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.218 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.218 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:19:06.476 00:19:06.476 --- 10.0.0.2 ping statistics --- 00:19:06.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.476 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
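The nvmf_tcp_init sequence traced above builds the loopback test topology used for the rest of the run: one E810 port (cvl_0_0) is moved into a private network namespace for the target, its sibling (cvl_0_1) stays in the root namespace for the initiator, and the two pings around this point confirm 10.0.0.1 and 10.0.0.2 can reach each other before any NVMe/TCP traffic flows. A condensed sketch of the same bring-up, reusing the interface names and addresses from this run:
# Target side lives in its own namespace; the initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator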
00:19:06.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:19:06.476 00:19:06.476 --- 10.0.0.1 ping statistics --- 00:19:06.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.476 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=640523 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 640523 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 640523 ']' 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.476 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.477 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.477 06:47:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.477 [2024-07-15 06:47:53.948719] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:06.477 [2024-07-15 06:47:53.948802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.477 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.477 [2024-07-15 06:47:54.019862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.734 [2024-07-15 06:47:54.115254] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.734 [2024-07-15 06:47:54.115315] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:06.734 [2024-07-15 06:47:54.115332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.734 [2024-07-15 06:47:54.115345] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.734 [2024-07-15 06:47:54.115357] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.734 [2024-07-15 06:47:54.115442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.734 [2024-07-15 06:47:54.115495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.734 [2024-07-15 06:47:54.115545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.734 [2024-07-15 06:47:54.115548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 [2024-07-15 06:47:54.271713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 Malloc0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 [2024-07-15 06:47:54.323456] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:06.734 test case1: single bdev can't be used in multiple subsystems 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.734 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.734 [2024-07-15 06:47:54.347302] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:06.734 [2024-07-15 06:47:54.347331] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:06.734 [2024-07-15 06:47:54.347361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.992 request: 00:19:06.992 { 00:19:06.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:06.992 "namespace": { 00:19:06.992 "bdev_name": "Malloc0", 00:19:06.992 "no_auto_visible": false 00:19:06.992 }, 00:19:06.992 "method": "nvmf_subsystem_add_ns", 00:19:06.992 "req_id": 1 00:19:06.992 } 00:19:06.992 Got JSON-RPC error response 00:19:06.992 response: 00:19:06.992 { 00:19:06.992 "code": -32602, 00:19:06.992 "message": "Invalid parameters" 00:19:06.992 } 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:06.992 Adding namespace failed - expected result. 
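test case1 above is a deliberate negative test: Malloc0 is already claimed exclusive_write by cnode1, so nvmf_subsystem_add_ns on cnode2 must fail with JSON-RPC error -32602 for the test to pass. A standalone sketch of the same sequence against a running nvmf_tgt, mirroring the rpc_cmd calls traced above:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'Adding namespace failed - expected result.'           # bdev already claimed
fi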
00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:06.992 test case2: host connect to nvmf target in multiple paths 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:06.992 [2024-07-15 06:47:54.355408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.992 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.561 06:47:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:08.128 06:47:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:08.128 06:47:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:08.128 06:47:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.128 06:47:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:08.128 06:47:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:10.022 06:47:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:10.022 [global] 00:19:10.022 thread=1 00:19:10.022 invalidate=1 00:19:10.022 rw=write 00:19:10.022 time_based=1 00:19:10.022 runtime=1 00:19:10.022 ioengine=libaio 00:19:10.022 direct=1 00:19:10.022 bs=4096 00:19:10.022 iodepth=1 00:19:10.022 norandommap=0 00:19:10.022 numjobs=1 00:19:10.022 00:19:10.022 verify_dump=1 00:19:10.022 verify_backlog=512 00:19:10.022 verify_state_save=0 00:19:10.022 do_verify=1 00:19:10.022 verify=crc32c-intel 00:19:10.022 [job0] 00:19:10.022 filename=/dev/nvme0n1 00:19:10.022 Could not set queue depth (nvme0n1) 00:19:10.278 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.278 fio-3.35 00:19:10.278 Starting 1 thread 00:19:11.650 00:19:11.651 job0: (groupid=0, jobs=1): err= 0: pid=641160: Mon Jul 15 06:47:58 2024 00:19:11.651 read: IOPS=26, BW=104KiB/s (107kB/s)(108KiB/1035msec) 00:19:11.651 slat (nsec): min=6625, max=48846, avg=24549.89, stdev=11682.05 00:19:11.651 
clat (usec): min=299, max=42023, avg=33801.13, stdev=16264.98 00:19:11.651 lat (usec): min=309, max=42039, avg=33825.68, stdev=16271.80 00:19:11.651 clat percentiles (usec): 00:19:11.651 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[40633], 00:19:11.651 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:11.651 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:11.651 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:11.651 | 99.99th=[42206] 00:19:11.651 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:11.651 slat (usec): min=5, max=29154, avg=63.74, stdev=1288.17 00:19:11.651 clat (usec): min=149, max=283, avg=171.34, stdev=12.81 00:19:11.651 lat (usec): min=156, max=29370, avg=235.08, stdev=1290.20 00:19:11.651 clat percentiles (usec): 00:19:11.651 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:19:11.651 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:19:11.651 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:19:11.651 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 285], 99.95th=[ 285], 00:19:11.651 | 99.99th=[ 285] 00:19:11.651 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:11.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:11.651 lat (usec) : 250=94.81%, 500=1.11% 00:19:11.651 lat (msec) : 50=4.08% 00:19:11.651 cpu : usr=0.00%, sys=0.48%, ctx=543, majf=0, minf=2 00:19:11.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.651 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.651 00:19:11.651 Run status group 0 (all jobs): 00:19:11.651 READ: bw=104KiB/s (107kB/s), 104KiB/s-104KiB/s (107kB/s-107kB/s), io=108KiB (111kB), run=1035-1035msec 00:19:11.651 WRITE: bw=1979KiB/s (2026kB/s), 1979KiB/s-1979KiB/s (2026kB/s-2026kB/s), io=2048KiB (2097kB), run=1035-1035msec 00:19:11.651 00:19:11.651 Disk stats (read/write): 00:19:11.651 nvme0n1: ios=75/512, merge=0/0, ticks=1732/86, in_queue=1818, util=98.80% 00:19:11.651 06:47:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.651 06:47:59 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.651 rmmod nvme_tcp 00:19:11.651 rmmod nvme_fabrics 00:19:11.651 rmmod nvme_keyring 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 640523 ']' 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 640523 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 640523 ']' 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 640523 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 640523 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 640523' 00:19:11.651 killing process with pid 640523 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 640523 00:19:11.651 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 640523 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.908 06:47:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.909 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.909 06:47:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.448 06:48:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.448 00:19:14.448 real 0m9.794s 00:19:14.448 user 0m22.079s 00:19:14.448 sys 0m2.284s 00:19:14.448 06:48:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:14.448 06:48:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:14.448 ************************************ 00:19:14.448 END TEST nvmf_nmic 00:19:14.448 ************************************ 00:19:14.448 06:48:01 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:14.448 06:48:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.448 06:48:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 
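The nvmftestfini path traced above, closing END TEST nvmf_nmic, mirrors the init path in reverse: disconnect the initiator, unload the NVMe fabric modules, kill the target, and flush the test addresses so the next test starts clean. Condensed, the teardown amounts to the sketch below (the pid and interface names are taken from this run; the explicit namespace delete is an assumption about what _remove_spdk_ns does):
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # drops both controllers/paths
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 640523 && wait 640523                        # nvmf_tgt pid from this run; wait works because it is a child of this shell
ip netns delete cvl_0_0_ns_spdk                   # assumption: _remove_spdk_ns removes the target namespace
ip -4 addr flush cvl_0_1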
00:19:14.448 06:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.448 ************************************ 00:19:14.448 START TEST nvmf_fio_target 00:19:14.448 ************************************ 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:14.448 * Looking for test storage... 00:19:14.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.448 06:48:01 
nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:14.448 06:48:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.449 06:48:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.449 06:48:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.358 06:48:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:16.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.358 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:16.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:16.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:16.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:19:16.359 00:19:16.359 --- 10.0.0.2 ping statistics --- 00:19:16.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.359 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:19:16.359 00:19:16.359 --- 10.0.0.1 ping statistics --- 00:19:16.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.359 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=643341 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 643341 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 643341 ']' 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.359 06:48:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.359 [2024-07-15 06:48:03.719609] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
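nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its UNIX RPC socket, /var/tmp/spdk.sock; the rpc_addr and max_retries=100 lines above are that polling loop. A minimal sketch of the same wait, assuming only that rpc.py can reach the socket once the target is up (rpc_get_methods is used here as a cheap liveness probe):
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                        # pid of the netns-exec wrapper
for ((i = 0; i < 100; i++)); do                   # mirrors max_retries=100
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" || exit 1                  # bail out if the target died early
    sleep 0.5
done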
00:19:16.359 [2024-07-15 06:48:03.719707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.359 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.359 [2024-07-15 06:48:03.791508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.359 [2024-07-15 06:48:03.888056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.359 [2024-07-15 06:48:03.888117] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.359 [2024-07-15 06:48:03.888133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.359 [2024-07-15 06:48:03.888147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.359 [2024-07-15 06:48:03.888158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.359 [2024-07-15 06:48:03.888226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.359 [2024-07-15 06:48:03.888259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.359 [2024-07-15 06:48:03.888378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.359 [2024-07-15 06:48:03.888382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.617 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:16.874 [2024-07-15 06:48:04.264404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.874 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.131 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:17.131 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.389 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:17.389 06:48:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.646 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:17.646 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.903 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:17.903 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:18.161 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.420 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:18.420 06:48:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.677 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:18.677 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.935 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:18.935 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:19.192 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:19.450 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:19.450 06:48:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.707 06:48:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:19.707 06:48:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:19.981 06:48:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.299 [2024-07-15 06:48:07.692785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.299 06:48:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:20.556 06:48:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:20.813 06:48:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:21.378 06:48:08 nvmf_tcp.nvmf_fio_target -- 
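The RPC sequence traced above leaves cnode1 with four differently-backed namespaces: Malloc0 and Malloc1 directly, raid0 striped over Malloc2/Malloc3, and concat0 concatenated over Malloc4 through Malloc6, so a single nvme connect exposes /dev/nvme0n1 through /dev/nvme0n4 for the four fio jobs that follow. A condensed sketch of that provisioning with the same rpc.py calls:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done    # auto-named Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'     # raid0 stripe
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0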
common/autotest_common.sh@1201 -- # sleep 2 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:23.282 06:48:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:23.282 [global] 00:19:23.282 thread=1 00:19:23.282 invalidate=1 00:19:23.282 rw=write 00:19:23.282 time_based=1 00:19:23.282 runtime=1 00:19:23.282 ioengine=libaio 00:19:23.282 direct=1 00:19:23.282 bs=4096 00:19:23.282 iodepth=1 00:19:23.282 norandommap=0 00:19:23.282 numjobs=1 00:19:23.282 00:19:23.282 verify_dump=1 00:19:23.282 verify_backlog=512 00:19:23.282 verify_state_save=0 00:19:23.282 do_verify=1 00:19:23.282 verify=crc32c-intel 00:19:23.282 [job0] 00:19:23.282 filename=/dev/nvme0n1 00:19:23.282 [job1] 00:19:23.282 filename=/dev/nvme0n2 00:19:23.282 [job2] 00:19:23.282 filename=/dev/nvme0n3 00:19:23.282 [job3] 00:19:23.282 filename=/dev/nvme0n4 00:19:23.282 Could not set queue depth (nvme0n1) 00:19:23.542 Could not set queue depth (nvme0n2) 00:19:23.542 Could not set queue depth (nvme0n3) 00:19:23.542 Could not set queue depth (nvme0n4) 00:19:23.542 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.542 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.542 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.542 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.542 fio-3.35 00:19:23.542 Starting 4 threads 00:19:24.919 00:19:24.919 job0: (groupid=0, jobs=1): err= 0: pid=644808: Mon Jul 15 06:48:12 2024 00:19:24.919 read: IOPS=1015, BW=4063KiB/s (4161kB/s)(4116KiB/1013msec) 00:19:24.919 slat (nsec): min=4723, max=66939, avg=24865.03, stdev=10822.37 00:19:24.919 clat (usec): min=272, max=41031, avg=619.25, stdev=2816.33 00:19:24.919 lat (usec): min=284, max=41044, avg=644.11, stdev=2815.57 00:19:24.919 clat percentiles (usec): 00:19:24.919 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 355], 00:19:24.919 | 30.00th=[ 383], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 457], 00:19:24.919 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 523], 00:19:24.919 | 99.00th=[ 660], 99.50th=[ 807], 99.90th=[41157], 99.95th=[41157], 00:19:24.919 | 99.99th=[41157] 00:19:24.919 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:19:24.919 slat (nsec): min=6199, max=41017, avg=12728.03, stdev=5213.36 00:19:24.919 clat (usec): min=160, max=1217, avg=205.95, stdev=42.89 00:19:24.919 lat (usec): min=173, max=1233, avg=218.68, stdev=42.35 00:19:24.919 clat percentiles (usec): 00:19:24.919 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:19:24.919 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 208], 00:19:24.919 | 70.00th=[ 
215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 249], 00:19:24.919 | 99.00th=[ 338], 99.50th=[ 383], 99.90th=[ 725], 99.95th=[ 1221], 00:19:24.919 | 99.99th=[ 1221] 00:19:24.919 bw ( KiB/s): min= 4096, max= 8192, per=32.32%, avg=6144.00, stdev=2896.31, samples=2 00:19:24.919 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:24.919 lat (usec) : 250=57.35%, 500=39.14%, 750=3.24%, 1000=0.04% 00:19:24.919 lat (msec) : 2=0.04%, 50=0.19% 00:19:24.919 cpu : usr=2.47%, sys=4.64%, ctx=2566, majf=0, minf=1 00:19:24.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.919 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.919 job1: (groupid=0, jobs=1): err= 0: pid=644809: Mon Jul 15 06:48:12 2024 00:19:24.919 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:24.919 slat (nsec): min=5836, max=58513, avg=12581.34, stdev=5849.69 00:19:24.919 clat (usec): min=227, max=652, avg=349.95, stdev=67.16 00:19:24.919 lat (usec): min=234, max=659, avg=362.53, stdev=67.39 00:19:24.919 clat percentiles (usec): 00:19:24.919 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 314], 00:19:24.919 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:19:24.919 | 70.00th=[ 343], 80.00th=[ 400], 90.00th=[ 457], 95.00th=[ 494], 00:19:24.919 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 652], 00:19:24.919 | 99.99th=[ 652] 00:19:24.919 write: IOPS=1793, BW=7173KiB/s (7345kB/s)(7180KiB/1001msec); 0 zone resets 00:19:24.919 slat (nsec): min=7698, max=62786, avg=17203.95, stdev=7714.20 00:19:24.919 clat (usec): min=173, max=493, avg=221.56, stdev=37.43 00:19:24.919 lat (usec): min=182, max=552, avg=238.76, stdev=41.16 00:19:24.919 clat percentiles (usec): 00:19:24.919 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:19:24.919 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:19:24.919 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 255], 95.00th=[ 297], 00:19:24.919 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 490], 99.95th=[ 494], 00:19:24.919 | 99.99th=[ 494] 00:19:24.919 bw ( KiB/s): min= 8192, max= 8192, per=43.09%, avg=8192.00, stdev= 0.00, samples=1 00:19:24.919 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:24.919 lat (usec) : 250=48.45%, 500=49.92%, 750=1.62% 00:19:24.919 cpu : usr=3.80%, sys=6.60%, ctx=3333, majf=0, minf=1 00:19:24.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.919 issued rwts: total=1536,1795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.919 job2: (groupid=0, jobs=1): err= 0: pid=644810: Mon Jul 15 06:48:12 2024 00:19:24.919 read: IOPS=906, BW=3624KiB/s (3711kB/s)(3628KiB/1001msec) 00:19:24.919 slat (nsec): min=5250, max=59028, avg=13547.29, stdev=7747.53 00:19:24.919 clat (usec): min=247, max=41935, avg=833.39, stdev=4472.03 00:19:24.919 lat (usec): min=255, max=41952, avg=846.93, stdev=4473.86 00:19:24.919 clat percentiles (usec): 00:19:24.919 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 
00:19:24.919 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 363], 00:19:24.919 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 437], 00:19:24.919 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:24.919 | 99.99th=[41681] 00:19:24.919 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:24.919 slat (nsec): min=6318, max=34970, avg=10706.59, stdev=4560.34 00:19:24.919 clat (usec): min=167, max=438, avg=208.77, stdev=23.35 00:19:24.919 lat (usec): min=174, max=446, avg=219.48, stdev=24.51 00:19:24.919 clat percentiles (usec): 00:19:24.920 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:19:24.920 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:19:24.920 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 243], 00:19:24.920 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 379], 99.95th=[ 441], 00:19:24.920 | 99.99th=[ 441] 00:19:24.920 bw ( KiB/s): min= 4096, max= 4096, per=21.54%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.920 lat (usec) : 250=51.99%, 500=47.33%, 750=0.10% 00:19:24.920 lat (msec) : 50=0.57% 00:19:24.920 cpu : usr=1.50%, sys=2.10%, ctx=1932, majf=0, minf=1 00:19:24.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.920 issued rwts: total=907,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.920 job3: (groupid=0, jobs=1): err= 0: pid=644811: Mon Jul 15 06:48:12 2024 00:19:24.920 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:19:24.920 slat (nsec): min=14078, max=35139, avg=23947.95, stdev=10282.00 00:19:24.920 clat (usec): min=40500, max=41998, avg=41082.29, stdev=380.26 00:19:24.920 lat (usec): min=40515, max=42012, avg=41106.24, stdev=379.44 00:19:24.920 clat percentiles (usec): 00:19:24.920 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:24.920 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:24.920 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:24.920 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:24.920 | 99.99th=[42206] 00:19:24.920 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:19:24.920 slat (nsec): min=7236, max=59272, avg=13010.99, stdev=5179.82 00:19:24.920 clat (usec): min=181, max=409, avg=215.10, stdev=23.37 00:19:24.920 lat (usec): min=191, max=425, avg=228.11, stdev=25.14 00:19:24.920 clat percentiles (usec): 00:19:24.920 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:19:24.920 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:19:24.920 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:19:24.920 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 412], 99.95th=[ 412], 00:19:24.920 | 99.99th=[ 412] 00:19:24.920 bw ( KiB/s): min= 4096, max= 4096, per=21.54%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.920 lat (usec) : 250=90.82%, 500=5.06% 00:19:24.920 lat (msec) : 50=4.12% 00:19:24.920 cpu : usr=0.10%, sys=0.88%, ctx=536, majf=0, minf=2 00:19:24.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.920 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.920 00:19:24.920 Run status group 0 (all jobs): 00:19:24.920 READ: bw=13.3MiB/s (14.0MB/s), 85.9KiB/s-6138KiB/s (88.0kB/s-6285kB/s), io=13.6MiB (14.3MB), run=1001-1024msec 00:19:24.920 WRITE: bw=18.6MiB/s (19.5MB/s), 2000KiB/s-7173KiB/s (2048kB/s-7345kB/s), io=19.0MiB (19.9MB), run=1001-1024msec 00:19:24.920 00:19:24.920 Disk stats (read/write): 00:19:24.920 nvme0n1: ios=1050/1534, merge=0/0, ticks=1402/309, in_queue=1711, util=98.00% 00:19:24.920 nvme0n2: ios=1186/1536, merge=0/0, ticks=1357/309, in_queue=1666, util=98.35% 00:19:24.920 nvme0n3: ios=570/697, merge=0/0, ticks=891/151, in_queue=1042, util=98.16% 00:19:24.920 nvme0n4: ios=37/512, merge=0/0, ticks=1530/112, in_queue=1642, util=98.35% 00:19:24.920 06:48:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:24.920 [global] 00:19:24.920 thread=1 00:19:24.920 invalidate=1 00:19:24.920 rw=randwrite 00:19:24.920 time_based=1 00:19:24.920 runtime=1 00:19:24.920 ioengine=libaio 00:19:24.920 direct=1 00:19:24.920 bs=4096 00:19:24.920 iodepth=1 00:19:24.920 norandommap=0 00:19:24.920 numjobs=1 00:19:24.920 00:19:24.920 verify_dump=1 00:19:24.920 verify_backlog=512 00:19:24.920 verify_state_save=0 00:19:24.920 do_verify=1 00:19:24.920 verify=crc32c-intel 00:19:24.920 [job0] 00:19:24.920 filename=/dev/nvme0n1 00:19:24.920 [job1] 00:19:24.920 filename=/dev/nvme0n2 00:19:24.920 [job2] 00:19:24.920 filename=/dev/nvme0n3 00:19:24.920 [job3] 00:19:24.920 filename=/dev/nvme0n4 00:19:24.920 Could not set queue depth (nvme0n1) 00:19:24.920 Could not set queue depth (nvme0n2) 00:19:24.920 Could not set queue depth (nvme0n3) 00:19:24.920 Could not set queue depth (nvme0n4) 00:19:25.179 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.179 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.179 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.179 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.179 fio-3.35 00:19:25.179 Starting 4 threads 00:19:26.555 00:19:26.555 job0: (groupid=0, jobs=1): err= 0: pid=645153: Mon Jul 15 06:48:13 2024 00:19:26.555 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:19:26.555 slat (nsec): min=9393, max=34441, avg=23847.43, stdev=9096.88 00:19:26.555 clat (usec): min=40649, max=42013, avg=41619.50, stdev=509.04 00:19:26.555 lat (usec): min=40658, max=42028, avg=41643.34, stdev=513.11 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:26.555 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:26.555 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:26.555 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:26.555 | 99.99th=[42206] 00:19:26.555 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:26.555 slat (nsec): min=8879, 
max=53506, avg=17097.28, stdev=5308.05 00:19:26.555 clat (usec): min=179, max=385, avg=228.56, stdev=27.58 00:19:26.555 lat (usec): min=196, max=405, avg=245.66, stdev=28.21 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:19:26.555 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:19:26.555 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273], 00:19:26.555 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 388], 00:19:26.555 | 99.99th=[ 388] 00:19:26.555 bw ( KiB/s): min= 4096, max= 4096, per=31.22%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.555 lat (usec) : 250=82.36%, 500=13.70% 00:19:26.555 lat (msec) : 50=3.94% 00:19:26.555 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=1 00:19:26.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.555 job1: (groupid=0, jobs=1): err= 0: pid=645154: Mon Jul 15 06:48:13 2024 00:19:26.555 read: IOPS=429, BW=1718KiB/s (1760kB/s)(1720KiB/1001msec) 00:19:26.555 slat (nsec): min=5426, max=33284, avg=12866.57, stdev=5026.73 00:19:26.555 clat (usec): min=235, max=42026, avg=2024.68, stdev=8176.89 00:19:26.555 lat (usec): min=243, max=42042, avg=2037.55, stdev=8179.57 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 277], 00:19:26.555 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:19:26.555 | 70.00th=[ 314], 80.00th=[ 367], 90.00th=[ 498], 95.00th=[ 519], 00:19:26.555 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:26.555 | 99.99th=[42206] 00:19:26.555 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:26.555 slat (nsec): min=8672, max=63776, avg=16232.57, stdev=6108.46 00:19:26.555 clat (usec): min=170, max=441, avg=218.38, stdev=30.99 00:19:26.555 lat (usec): min=179, max=461, avg=234.62, stdev=33.24 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:19:26.555 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:19:26.555 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 273], 00:19:26.555 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 441], 99.95th=[ 441], 00:19:26.555 | 99.99th=[ 441] 00:19:26.555 bw ( KiB/s): min= 4096, max= 4096, per=31.22%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.555 lat (usec) : 250=48.83%, 500=47.13%, 750=2.12% 00:19:26.555 lat (msec) : 50=1.91% 00:19:26.555 cpu : usr=0.80%, sys=1.40%, ctx=943, majf=0, minf=2 00:19:26.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 issued rwts: total=430,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.555 job2: (groupid=0, jobs=1): err= 0: pid=645155: Mon Jul 15 06:48:13 2024 
00:19:26.555 read: IOPS=1165, BW=4663KiB/s (4775kB/s)(4668KiB/1001msec) 00:19:26.555 slat (nsec): min=5262, max=60789, avg=17027.57, stdev=9917.91 00:19:26.555 clat (usec): min=233, max=42539, avg=543.24, stdev=3169.64 00:19:26.555 lat (usec): min=241, max=42573, avg=560.27, stdev=3170.85 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:19:26.555 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:19:26.555 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 375], 00:19:26.555 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:19:26.555 | 99.99th=[42730] 00:19:26.555 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:26.555 slat (nsec): min=7229, max=53627, avg=14210.19, stdev=5954.58 00:19:26.555 clat (usec): min=161, max=403, avg=203.68, stdev=30.96 00:19:26.555 lat (usec): min=169, max=427, avg=217.89, stdev=33.19 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:19:26.555 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:19:26.555 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 265], 00:19:26.555 | 99.00th=[ 306], 99.50th=[ 334], 99.90th=[ 379], 99.95th=[ 404], 00:19:26.555 | 99.99th=[ 404] 00:19:26.555 bw ( KiB/s): min= 8192, max= 8192, per=62.44%, avg=8192.00, stdev= 0.00, samples=1 00:19:26.555 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:26.555 lat (usec) : 250=55.09%, 500=44.17%, 750=0.48% 00:19:26.555 lat (msec) : 50=0.26% 00:19:26.555 cpu : usr=2.40%, sys=4.60%, ctx=2704, majf=0, minf=1 00:19:26.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 issued rwts: total=1167,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.555 job3: (groupid=0, jobs=1): err= 0: pid=645156: Mon Jul 15 06:48:13 2024 00:19:26.555 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:26.555 slat (nsec): min=5398, max=51610, avg=16725.21, stdev=8621.93 00:19:26.555 clat (usec): min=252, max=42053, avg=1608.79, stdev=7208.54 00:19:26.555 lat (usec): min=258, max=42086, avg=1625.51, stdev=7210.11 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:19:26.555 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:19:26.555 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 388], 00:19:26.555 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:26.555 | 99.99th=[42206] 00:19:26.555 write: IOPS=729, BW=2917KiB/s (2987kB/s)(2920KiB/1001msec); 0 zone resets 00:19:26.555 slat (nsec): min=6643, max=44927, avg=13130.61, stdev=5448.15 00:19:26.555 clat (usec): min=169, max=370, avg=209.70, stdev=23.58 00:19:26.555 lat (usec): min=177, max=387, avg=222.83, stdev=26.20 00:19:26.555 clat percentiles (usec): 00:19:26.555 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:19:26.555 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:19:26.555 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 247], 00:19:26.555 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 371], 99.95th=[ 371], 00:19:26.555 | 99.99th=[ 371] 00:19:26.555 bw 
( KiB/s): min= 4096, max= 4096, per=31.22%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.555 lat (usec) : 250=56.20%, 500=42.27%, 750=0.16% 00:19:26.555 lat (msec) : 2=0.08%, 50=1.29% 00:19:26.555 cpu : usr=1.10%, sys=1.80%, ctx=1242, majf=0, minf=1 00:19:26.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.555 issued rwts: total=512,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.555 00:19:26.555 Run status group 0 (all jobs): 00:19:26.555 READ: bw=8495KiB/s (8698kB/s), 83.7KiB/s-4663KiB/s (85.8kB/s-4775kB/s), io=8520KiB (8724kB), run=1001-1003msec 00:19:26.555 WRITE: bw=12.8MiB/s (13.4MB/s), 2042KiB/s-6138KiB/s (2091kB/s-6285kB/s), io=12.9MiB (13.5MB), run=1001-1003msec 00:19:26.555 00:19:26.556 Disk stats (read/write): 00:19:26.556 nvme0n1: ios=68/512, merge=0/0, ticks=1677/111, in_queue=1788, util=97.80% 00:19:26.556 nvme0n2: ios=45/512, merge=0/0, ticks=1656/113, in_queue=1769, util=98.07% 00:19:26.556 nvme0n3: ios=1063/1024, merge=0/0, ticks=1523/189, in_queue=1712, util=98.01% 00:19:26.556 nvme0n4: ios=231/512, merge=0/0, ticks=692/108, in_queue=800, util=89.64% 00:19:26.556 06:48:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:26.556 [global] 00:19:26.556 thread=1 00:19:26.556 invalidate=1 00:19:26.556 rw=write 00:19:26.556 time_based=1 00:19:26.556 runtime=1 00:19:26.556 ioengine=libaio 00:19:26.556 direct=1 00:19:26.556 bs=4096 00:19:26.556 iodepth=128 00:19:26.556 norandommap=0 00:19:26.556 numjobs=1 00:19:26.556 00:19:26.556 verify_dump=1 00:19:26.556 verify_backlog=512 00:19:26.556 verify_state_save=0 00:19:26.556 do_verify=1 00:19:26.556 verify=crc32c-intel 00:19:26.556 [job0] 00:19:26.556 filename=/dev/nvme0n1 00:19:26.556 [job1] 00:19:26.556 filename=/dev/nvme0n2 00:19:26.556 [job2] 00:19:26.556 filename=/dev/nvme0n3 00:19:26.556 [job3] 00:19:26.556 filename=/dev/nvme0n4 00:19:26.556 Could not set queue depth (nvme0n1) 00:19:26.556 Could not set queue depth (nvme0n2) 00:19:26.556 Could not set queue depth (nvme0n3) 00:19:26.556 Could not set queue depth (nvme0n4) 00:19:26.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.556 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.556 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.556 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.556 fio-3.35 00:19:26.556 Starting 4 threads 00:19:27.932 00:19:27.932 job0: (groupid=0, jobs=1): err= 0: pid=645387: Mon Jul 15 06:48:15 2024 00:19:27.932 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:19:27.932 slat (usec): min=3, max=9936, avg=96.46, stdev=668.49 00:19:27.932 clat (usec): min=3833, max=22290, avg=11903.61, stdev=2818.89 00:19:27.932 lat (usec): min=3839, max=22296, avg=12000.08, stdev=2863.29 00:19:27.932 clat percentiles (usec): 00:19:27.932 | 1.00th=[ 4555], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[10683], 00:19:27.932 | 
30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:19:27.932 | 70.00th=[11469], 80.00th=[13173], 90.00th=[16581], 95.00th=[17957], 00:19:27.932 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21627], 99.95th=[22414], 00:19:27.932 | 99.99th=[22414] 00:19:27.932 write: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(22.7MiB/1011msec); 0 zone resets 00:19:27.932 slat (usec): min=4, max=8763, avg=72.37, stdev=353.92 00:19:27.932 clat (usec): min=2405, max=22344, avg=10527.99, stdev=2899.31 00:19:27.932 lat (usec): min=2413, max=22351, avg=10600.36, stdev=2918.32 00:19:27.932 clat percentiles (usec): 00:19:27.932 | 1.00th=[ 3163], 5.00th=[ 5080], 10.00th=[ 6783], 20.00th=[ 8848], 00:19:27.932 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11076], 60.00th=[11207], 00:19:27.932 | 70.00th=[11338], 80.00th=[11469], 90.00th=[12780], 95.00th=[14746], 00:19:27.932 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22414], 99.95th=[22414], 00:19:27.932 | 99.99th=[22414] 00:19:27.932 bw ( KiB/s): min=21000, max=24432, per=33.71%, avg=22716.00, stdev=2426.79, samples=2 00:19:27.932 iops : min= 5250, max= 6108, avg=5679.00, stdev=606.70, samples=2 00:19:27.932 lat (msec) : 4=1.41%, 10=17.74%, 20=79.25%, 50=1.60% 00:19:27.932 cpu : usr=4.16%, sys=7.62%, ctx=701, majf=0, minf=1 00:19:27.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:27.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.932 issued rwts: total=5632,5806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.932 job1: (groupid=0, jobs=1): err= 0: pid=645388: Mon Jul 15 06:48:15 2024 00:19:27.932 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:19:27.932 slat (usec): min=3, max=33646, avg=163.65, stdev=1260.49 00:19:27.932 clat (usec): min=5615, max=71556, avg=19853.55, stdev=13085.18 00:19:27.932 lat (usec): min=5621, max=80743, avg=20017.20, stdev=13197.57 00:19:27.932 clat percentiles (usec): 00:19:27.932 | 1.00th=[ 7308], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11338], 00:19:27.932 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13566], 60.00th=[16450], 00:19:27.932 | 70.00th=[22414], 80.00th=[24249], 90.00th=[41157], 95.00th=[51119], 00:19:27.932 | 99.00th=[66323], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:19:27.932 | 99.99th=[71828] 00:19:27.932 write: IOPS=3361, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1011msec); 0 zone resets 00:19:27.932 slat (usec): min=3, max=22773, avg=135.27, stdev=881.55 00:19:27.932 clat (usec): min=3568, max=71518, avg=18871.22, stdev=7208.42 00:19:27.933 lat (usec): min=3577, max=71528, avg=19006.49, stdev=7259.41 00:19:27.933 clat percentiles (usec): 00:19:27.933 | 1.00th=[ 5014], 5.00th=[ 7635], 10.00th=[10159], 20.00th=[12256], 00:19:27.933 | 30.00th=[13960], 40.00th=[16450], 50.00th=[19792], 60.00th=[22676], 00:19:27.933 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25297], 95.00th=[26084], 00:19:27.933 | 99.00th=[43254], 99.50th=[48497], 99.90th=[62129], 99.95th=[71828], 00:19:27.933 | 99.99th=[71828] 00:19:27.933 bw ( KiB/s): min=12280, max=13888, per=19.42%, avg=13084.00, stdev=1137.03, samples=2 00:19:27.933 iops : min= 3070, max= 3472, avg=3271.00, stdev=284.26, samples=2 00:19:27.933 lat (msec) : 4=0.26%, 10=7.17%, 20=49.95%, 50=40.03%, 100=2.58% 00:19:27.933 cpu : usr=3.96%, sys=8.22%, ctx=308, majf=0, minf=1 00:19:27.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.0% 00:19:27.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.933 issued rwts: total=3072,3398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.933 job2: (groupid=0, jobs=1): err= 0: pid=645389: Mon Jul 15 06:48:15 2024 00:19:27.933 read: IOPS=4823, BW=18.8MiB/s (19.8MB/s)(19.0MiB/1006msec) 00:19:27.933 slat (usec): min=4, max=5716, avg=100.64, stdev=587.60 00:19:27.933 clat (usec): min=1494, max=19365, avg=12547.27, stdev=1643.20 00:19:27.933 lat (usec): min=5318, max=19404, avg=12647.91, stdev=1694.81 00:19:27.933 clat percentiles (usec): 00:19:27.933 | 1.00th=[ 5800], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11994], 00:19:27.933 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:19:27.933 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14222], 95.00th=[15533], 00:19:27.933 | 99.00th=[17433], 99.50th=[17433], 99.90th=[18220], 99.95th=[18744], 00:19:27.933 | 99.99th=[19268] 00:19:27.933 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:19:27.933 slat (usec): min=3, max=18364, avg=88.89, stdev=434.22 00:19:27.933 clat (usec): min=6276, max=30742, avg=12945.93, stdev=2533.61 00:19:27.933 lat (usec): min=6288, max=30804, avg=13034.83, stdev=2547.17 00:19:27.933 clat percentiles (usec): 00:19:27.933 | 1.00th=[ 7701], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[12125], 00:19:27.933 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:27.933 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14746], 95.00th=[16712], 00:19:27.933 | 99.00th=[26608], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:19:27.933 | 99.99th=[30802] 00:19:27.933 bw ( KiB/s): min=20480, max=20480, per=30.39%, avg=20480.00, stdev= 0.00, samples=2 00:19:27.933 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:27.933 lat (msec) : 2=0.01%, 10=6.31%, 20=92.41%, 50=1.27% 00:19:27.933 cpu : usr=8.16%, sys=11.44%, ctx=586, majf=0, minf=1 00:19:27.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:27.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.933 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.933 job3: (groupid=0, jobs=1): err= 0: pid=645390: Mon Jul 15 06:48:15 2024 00:19:27.933 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:19:27.933 slat (usec): min=2, max=30474, avg=213.62, stdev=1655.64 00:19:27.933 clat (msec): min=3, max=117, avg=27.72, stdev=19.36 00:19:27.933 lat (msec): min=3, max=117, avg=27.93, stdev=19.49 00:19:27.933 clat percentiles (msec): 00:19:27.933 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 18], 00:19:27.933 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 26], 00:19:27.933 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 77], 00:19:27.933 | 99.00th=[ 107], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:19:27.933 | 99.99th=[ 118] 00:19:27.933 write: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1002msec); 0 zone resets 00:19:27.933 slat (usec): min=4, max=16439, avg=154.31, stdev=835.70 00:19:27.933 clat (usec): min=360, max=49850, avg=20716.31, stdev=9200.14 00:19:27.933 lat (usec): min=735, max=49860, avg=20870.62, 
stdev=9254.46 00:19:27.933 clat percentiles (usec): 00:19:27.933 | 1.00th=[ 3720], 5.00th=[ 8094], 10.00th=[10159], 20.00th=[11338], 00:19:27.933 | 30.00th=[13960], 40.00th=[18744], 50.00th=[23200], 60.00th=[23725], 00:19:27.933 | 70.00th=[23987], 80.00th=[24511], 90.00th=[33424], 95.00th=[39060], 00:19:27.933 | 99.00th=[45876], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:19:27.933 | 99.99th=[50070] 00:19:27.933 bw ( KiB/s): min= 8424, max=12288, per=15.37%, avg=10356.00, stdev=2732.26, samples=2 00:19:27.933 iops : min= 2106, max= 3072, avg=2589.00, stdev=683.07, samples=2 00:19:27.933 lat (usec) : 500=0.02%, 750=0.08% 00:19:27.933 lat (msec) : 2=0.02%, 4=0.97%, 10=5.60%, 20=28.37%, 50=60.73% 00:19:27.933 lat (msec) : 100=2.45%, 250=1.77% 00:19:27.933 cpu : usr=4.30%, sys=5.39%, ctx=260, majf=0, minf=1 00:19:27.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:27.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.933 issued rwts: total=2560,2709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.933 00:19:27.933 Run status group 0 (all jobs): 00:19:27.933 READ: bw=62.3MiB/s (65.3MB/s), 9.98MiB/s-21.8MiB/s (10.5MB/s-22.8MB/s), io=63.0MiB (66.0MB), run=1002-1011msec 00:19:27.933 WRITE: bw=65.8MiB/s (69.0MB/s), 10.6MiB/s-22.4MiB/s (11.1MB/s-23.5MB/s), io=66.5MiB (69.8MB), run=1002-1011msec 00:19:27.933 00:19:27.933 Disk stats (read/write): 00:19:27.933 nvme0n1: ios=4649/5031, merge=0/0, ticks=53612/51637, in_queue=105249, util=99.40% 00:19:27.933 nvme0n2: ios=2306/2560, merge=0/0, ticks=50073/50612, in_queue=100685, util=95.33% 00:19:27.933 nvme0n3: ios=4153/4327, merge=0/0, ticks=24718/25168, in_queue=49886, util=90.82% 00:19:27.933 nvme0n4: ios=2360/2560, merge=0/0, ticks=47801/52999, in_queue=100800, util=99.05% 00:19:27.933 06:48:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:27.933 [global] 00:19:27.933 thread=1 00:19:27.933 invalidate=1 00:19:27.933 rw=randwrite 00:19:27.933 time_based=1 00:19:27.933 runtime=1 00:19:27.933 ioengine=libaio 00:19:27.933 direct=1 00:19:27.933 bs=4096 00:19:27.933 iodepth=128 00:19:27.933 norandommap=0 00:19:27.933 numjobs=1 00:19:27.933 00:19:27.933 verify_dump=1 00:19:27.933 verify_backlog=512 00:19:27.933 verify_state_save=0 00:19:27.933 do_verify=1 00:19:27.933 verify=crc32c-intel 00:19:27.933 [job0] 00:19:27.933 filename=/dev/nvme0n1 00:19:27.933 [job1] 00:19:27.933 filename=/dev/nvme0n2 00:19:27.933 [job2] 00:19:27.933 filename=/dev/nvme0n3 00:19:27.933 [job3] 00:19:27.933 filename=/dev/nvme0n4 00:19:27.933 Could not set queue depth (nvme0n1) 00:19:27.933 Could not set queue depth (nvme0n2) 00:19:27.933 Could not set queue depth (nvme0n3) 00:19:27.933 Could not set queue depth (nvme0n4) 00:19:27.933 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.933 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.933 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.933 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.933 fio-3.35 
00:19:27.933 Starting 4 threads 00:19:29.310 00:19:29.310 job0: (groupid=0, jobs=1): err= 0: pid=645619: Mon Jul 15 06:48:16 2024 00:19:29.310 read: IOPS=3844, BW=15.0MiB/s (15.7MB/s)(15.2MiB/1010msec) 00:19:29.310 slat (usec): min=2, max=7798, avg=114.97, stdev=628.94 00:19:29.310 clat (usec): min=3407, max=26886, avg=14998.38, stdev=2664.17 00:19:29.310 lat (usec): min=9246, max=26892, avg=15113.35, stdev=2703.07 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:19:29.310 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14615], 60.00th=[15008], 00:19:29.310 | 70.00th=[15795], 80.00th=[17171], 90.00th=[18744], 95.00th=[19792], 00:19:29.310 | 99.00th=[22938], 99.50th=[22938], 99.90th=[24773], 99.95th=[26870], 00:19:29.310 | 99.99th=[26870] 00:19:29.310 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:19:29.310 slat (usec): min=4, max=10069, avg=124.32, stdev=641.10 00:19:29.310 clat (usec): min=8296, max=31163, avg=16940.85, stdev=4851.44 00:19:29.310 lat (usec): min=8304, max=31172, avg=17065.17, stdev=4905.99 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:19:29.310 | 30.00th=[12780], 40.00th=[13829], 50.00th=[16057], 60.00th=[18220], 00:19:29.310 | 70.00th=[20317], 80.00th=[21627], 90.00th=[23987], 95.00th=[25035], 00:19:29.310 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31065], 99.95th=[31065], 00:19:29.310 | 99.99th=[31065] 00:19:29.310 bw ( KiB/s): min=16384, max=16384, per=25.31%, avg=16384.00, stdev= 0.00, samples=2 00:19:29.310 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:29.310 lat (msec) : 4=0.01%, 10=0.36%, 20=80.02%, 50=19.60% 00:19:29.310 cpu : usr=6.84%, sys=9.32%, ctx=371, majf=0, minf=1 00:19:29.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:29.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.310 issued rwts: total=3883,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.310 job1: (groupid=0, jobs=1): err= 0: pid=645620: Mon Jul 15 06:48:16 2024 00:19:29.310 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:19:29.310 slat (usec): min=2, max=13756, avg=197.78, stdev=1112.18 00:19:29.310 clat (usec): min=13519, max=50894, avg=24548.00, stdev=7626.96 00:19:29.310 lat (usec): min=13538, max=51008, avg=24745.78, stdev=7724.69 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[14615], 5.00th=[16057], 10.00th=[17171], 20.00th=[17695], 00:19:29.310 | 30.00th=[17957], 40.00th=[19268], 50.00th=[21103], 60.00th=[27657], 00:19:29.310 | 70.00th=[28967], 80.00th=[32113], 90.00th=[36439], 95.00th=[38011], 00:19:29.310 | 99.00th=[41681], 99.50th=[44827], 99.90th=[45876], 99.95th=[47973], 00:19:29.310 | 99.99th=[51119] 00:19:29.310 write: IOPS=2378, BW=9513KiB/s (9741kB/s)(9608KiB/1010msec); 0 zone resets 00:19:29.310 slat (usec): min=3, max=10699, avg=237.41, stdev=914.78 00:19:29.310 clat (usec): min=3133, max=63553, avg=32082.28, stdev=12404.10 00:19:29.310 lat (usec): min=10104, max=63574, avg=32319.69, stdev=12479.39 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[12780], 5.00th=[18482], 10.00th=[20055], 20.00th=[20579], 00:19:29.310 | 30.00th=[21890], 40.00th=[24511], 50.00th=[28181], 60.00th=[34341], 00:19:29.310 | 
70.00th=[40633], 80.00th=[44303], 90.00th=[48497], 95.00th=[55313], 00:19:29.310 | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:19:29.310 | 99.99th=[63701] 00:19:29.310 bw ( KiB/s): min= 8976, max= 9216, per=14.05%, avg=9096.00, stdev=169.71, samples=2 00:19:29.310 iops : min= 2244, max= 2304, avg=2274.00, stdev=42.43, samples=2 00:19:29.310 lat (msec) : 4=0.02%, 20=26.31%, 50=69.98%, 100=3.69% 00:19:29.310 cpu : usr=4.46%, sys=5.55%, ctx=316, majf=0, minf=1 00:19:29.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:29.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.310 issued rwts: total=2048,2402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.310 job2: (groupid=0, jobs=1): err= 0: pid=645621: Mon Jul 15 06:48:16 2024 00:19:29.310 read: IOPS=4961, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec) 00:19:29.310 slat (usec): min=2, max=11015, avg=109.83, stdev=734.16 00:19:29.310 clat (usec): min=1557, max=23190, avg=13341.00, stdev=2877.57 00:19:29.310 lat (usec): min=4074, max=23206, avg=13450.83, stdev=2908.38 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[ 4883], 5.00th=[10421], 10.00th=[11207], 20.00th=[11731], 00:19:29.310 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13435], 00:19:29.310 | 70.00th=[13960], 80.00th=[15008], 90.00th=[17695], 95.00th=[19530], 00:19:29.310 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22938], 99.95th=[23200], 00:19:29.310 | 99.99th=[23200] 00:19:29.310 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:29.310 slat (usec): min=3, max=9807, avg=82.86, stdev=389.85 00:19:29.310 clat (usec): min=1193, max=23258, avg=11759.87, stdev=2800.36 00:19:29.310 lat (usec): min=1202, max=23279, avg=11842.72, stdev=2811.10 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[ 3228], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[10421], 00:19:29.310 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:19:29.310 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13698], 95.00th=[15270], 00:19:29.310 | 99.00th=[20055], 99.50th=[22152], 99.90th=[23200], 99.95th=[23200], 00:19:29.310 | 99.99th=[23200] 00:19:29.310 bw ( KiB/s): min=20480, max=20480, per=31.64%, avg=20480.00, stdev= 0.00, samples=2 00:19:29.310 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:29.310 lat (msec) : 2=0.05%, 4=0.84%, 10=9.50%, 20=87.05%, 50=2.56% 00:19:29.310 cpu : usr=3.39%, sys=6.57%, ctx=631, majf=0, minf=1 00:19:29.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:29.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.310 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.310 job3: (groupid=0, jobs=1): err= 0: pid=645622: Mon Jul 15 06:48:16 2024 00:19:29.310 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:19:29.310 slat (usec): min=3, max=15082, avg=114.10, stdev=828.78 00:19:29.310 clat (usec): min=5324, max=31821, avg=14809.50, stdev=3417.66 00:19:29.310 lat (usec): min=5346, max=31892, avg=14923.60, stdev=3481.74 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[ 9241], 5.00th=[11338], 
10.00th=[11863], 20.00th=[12649], 00:19:29.310 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13566], 60.00th=[14615], 00:19:29.310 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20055], 95.00th=[21890], 00:19:29.310 | 99.00th=[26608], 99.50th=[27395], 99.90th=[29492], 99.95th=[29754], 00:19:29.310 | 99.99th=[31851] 00:19:29.310 write: IOPS=4681, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1010msec); 0 zone resets 00:19:29.310 slat (usec): min=4, max=14477, avg=86.30, stdev=599.52 00:19:29.310 clat (usec): min=621, max=29600, avg=12602.64, stdev=3641.09 00:19:29.310 lat (usec): min=1039, max=29620, avg=12688.94, stdev=3669.15 00:19:29.310 clat percentiles (usec): 00:19:29.310 | 1.00th=[ 3556], 5.00th=[ 6128], 10.00th=[ 7701], 20.00th=[ 9634], 00:19:29.310 | 30.00th=[11207], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:19:29.310 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15926], 95.00th=[17957], 00:19:29.310 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:19:29.310 | 99.99th=[29492] 00:19:29.310 bw ( KiB/s): min=16456, max=20464, per=28.52%, avg=18460.00, stdev=2834.08, samples=2 00:19:29.310 iops : min= 4114, max= 5116, avg=4615.00, stdev=708.52, samples=2 00:19:29.310 lat (usec) : 750=0.01% 00:19:29.310 lat (msec) : 2=0.21%, 4=0.48%, 10=11.10%, 20=82.48%, 50=5.72% 00:19:29.310 cpu : usr=7.43%, sys=10.31%, ctx=377, majf=0, minf=1 00:19:29.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:29.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.310 issued rwts: total=4608,4728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.310 00:19:29.310 Run status group 0 (all jobs): 00:19:29.310 READ: bw=60.0MiB/s (63.0MB/s), 8111KiB/s-19.4MiB/s (8306kB/s-20.3MB/s), io=60.6MiB (63.6MB), run=1005-1010msec 00:19:29.310 WRITE: bw=63.2MiB/s (66.3MB/s), 9513KiB/s-19.9MiB/s (9741kB/s-20.9MB/s), io=63.9MiB (67.0MB), run=1005-1010msec 00:19:29.310 00:19:29.310 Disk stats (read/write): 00:19:29.310 nvme0n1: ios=3360/3584, merge=0/0, ticks=25349/26181, in_queue=51530, util=99.30% 00:19:29.310 nvme0n2: ios=1561/1999, merge=0/0, ticks=14349/21647, in_queue=35996, util=99.29% 00:19:29.310 nvme0n3: ios=4096/4359, merge=0/0, ticks=42814/39579, in_queue=82393, util=88.78% 00:19:29.310 nvme0n4: ios=3759/4096, merge=0/0, ticks=52611/46601, in_queue=99212, util=95.98% 00:19:29.310 06:48:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:29.310 06:48:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=645754 00:19:29.310 06:48:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:29.310 06:48:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:29.310 [global] 00:19:29.310 thread=1 00:19:29.310 invalidate=1 00:19:29.310 rw=read 00:19:29.310 time_based=1 00:19:29.310 runtime=10 00:19:29.310 ioengine=libaio 00:19:29.310 direct=1 00:19:29.310 bs=4096 00:19:29.310 iodepth=1 00:19:29.310 norandommap=1 00:19:29.310 numjobs=1 00:19:29.310 00:19:29.310 [job0] 00:19:29.310 filename=/dev/nvme0n1 00:19:29.311 [job1] 00:19:29.311 filename=/dev/nvme0n2 00:19:29.311 [job2] 00:19:29.311 filename=/dev/nvme0n3 00:19:29.311 [job3] 00:19:29.311 filename=/dev/nvme0n4 00:19:29.311 Could not set queue depth (nvme0n1) 00:19:29.311 Could not set queue depth (nvme0n2) 
00:19:29.311 Could not set queue depth (nvme0n3) 00:19:29.311 Could not set queue depth (nvme0n4) 00:19:29.311 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.311 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.311 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.311 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.311 fio-3.35 00:19:29.311 Starting 4 threads 00:19:32.594 06:48:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:32.594 06:48:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:32.594 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4251648, buflen=4096 00:19:32.594 fio: pid=645947, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:32.594 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.594 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:32.594 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=38653952, buflen=4096 00:19:32.594 fio: pid=645934, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.158 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18690048, buflen=4096 00:19:33.158 fio: pid=645858, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.158 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.158 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:33.158 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.158 06:48:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:33.416 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1806336, buflen=4096 00:19:33.416 fio: pid=645881, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.416 00:19:33.416 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=645858: Mon Jul 15 06:48:20 2024 00:19:33.416 read: IOPS=1334, BW=5337KiB/s (5465kB/s)(17.8MiB/3420msec) 00:19:33.416 slat (usec): min=4, max=28638, avg=27.61, stdev=604.58 00:19:33.416 clat (usec): min=232, max=42018, avg=714.03, stdev=4003.10 00:19:33.416 lat (usec): min=238, max=61635, avg=741.64, stdev=4093.08 00:19:33.416 clat percentiles (usec): 00:19:33.416 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 281], 00:19:33.416 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:19:33.416 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 437], 00:19:33.416 | 99.00th=[ 1123], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:33.416 | 99.99th=[42206] 00:19:33.416 bw ( KiB/s): min= 104, max=10432, per=29.92%, 
avg=4993.33, stdev=5292.58, samples=6 00:19:33.416 iops : min= 26, max= 2608, avg=1248.33, stdev=1323.14, samples=6 00:19:33.416 lat (usec) : 250=1.86%, 500=95.99%, 750=0.83%, 1000=0.24% 00:19:33.416 lat (msec) : 2=0.07%, 4=0.02%, 50=0.96% 00:19:33.416 cpu : usr=1.11%, sys=2.28%, ctx=4568, majf=0, minf=1 00:19:33.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 issued rwts: total=4564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.416 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=645881: Mon Jul 15 06:48:20 2024 00:19:33.416 read: IOPS=119, BW=475KiB/s (487kB/s)(1764KiB/3710msec) 00:19:33.416 slat (usec): min=4, max=17708, avg=83.53, stdev=977.61 00:19:33.416 clat (usec): min=278, max=60064, avg=8273.60, stdev=16250.69 00:19:33.416 lat (usec): min=282, max=60080, avg=8317.16, stdev=16325.49 00:19:33.416 clat percentiles (usec): 00:19:33.416 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 363], 00:19:33.416 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 445], 60.00th=[ 469], 00:19:33.416 | 70.00th=[ 502], 80.00th=[ 586], 90.00th=[41157], 95.00th=[42206], 00:19:33.416 | 99.00th=[42206], 99.50th=[42206], 99.90th=[60031], 99.95th=[60031], 00:19:33.416 | 99.99th=[60031] 00:19:33.416 bw ( KiB/s): min= 96, max= 2912, per=2.98%, avg=498.57, stdev=1064.22, samples=7 00:19:33.416 iops : min= 24, max= 728, avg=124.57, stdev=266.09, samples=7 00:19:33.416 lat (usec) : 500=68.78%, 750=11.76%, 1000=0.23% 00:19:33.416 lat (msec) : 50=18.78%, 100=0.23% 00:19:33.416 cpu : usr=0.05%, sys=0.19%, ctx=447, majf=0, minf=1 00:19:33.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.416 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=645934: Mon Jul 15 06:48:20 2024 00:19:33.416 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(36.9MiB/3140msec) 00:19:33.416 slat (usec): min=5, max=16700, avg=14.27, stdev=218.20 00:19:33.416 clat (usec): min=235, max=41512, avg=313.18, stdev=633.75 00:19:33.416 lat (usec): min=242, max=41523, avg=327.44, stdev=670.44 00:19:33.416 clat percentiles (usec): 00:19:33.416 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 269], 00:19:33.416 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 306], 00:19:33.416 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 359], 00:19:33.416 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 857], 99.95th=[ 2409], 00:19:33.416 | 99.99th=[41681] 00:19:33.416 bw ( KiB/s): min=10216, max=13872, per=72.35%, avg=12074.67, stdev=1265.05, samples=6 00:19:33.416 iops : min= 2554, max= 3468, avg=3018.67, stdev=316.26, samples=6 00:19:33.416 lat (usec) : 250=2.75%, 500=95.75%, 750=1.37%, 1000=0.02% 00:19:33.416 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02% 00:19:33.416 cpu : usr=1.82%, sys=5.64%, ctx=9443, majf=0, minf=1 00:19:33.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:33.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 issued rwts: total=9438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.416 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=645947: Mon Jul 15 06:48:20 2024 00:19:33.416 read: IOPS=359, BW=1436KiB/s (1471kB/s)(4152KiB/2891msec) 00:19:33.416 slat (nsec): min=5739, max=47691, avg=11006.21, stdev=6353.58 00:19:33.416 clat (usec): min=287, max=42061, avg=2749.35, stdev=9659.97 00:19:33.416 lat (usec): min=296, max=42080, avg=2760.35, stdev=9662.55 00:19:33.416 clat percentiles (usec): 00:19:33.416 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:19:33.416 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:19:33.416 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 437], 95.00th=[41157], 00:19:33.416 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:33.416 | 99.99th=[42206] 00:19:33.416 bw ( KiB/s): min= 96, max= 4344, per=9.86%, avg=1646.40, stdev=2132.67, samples=5 00:19:33.416 iops : min= 24, max= 1086, avg=411.60, stdev=533.17, samples=5 00:19:33.416 lat (usec) : 500=92.49%, 750=1.54% 00:19:33.416 lat (msec) : 50=5.87% 00:19:33.416 cpu : usr=0.21%, sys=0.59%, ctx=1041, majf=0, minf=1 00:19:33.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.416 issued rwts: total=1039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.417 00:19:33.417 Run status group 0 (all jobs): 00:19:33.417 READ: bw=16.3MiB/s (17.1MB/s), 475KiB/s-11.7MiB/s (487kB/s-12.3MB/s), io=60.5MiB (63.4MB), run=2891-3710msec 00:19:33.417 00:19:33.417 Disk stats (read/write): 00:19:33.417 nvme0n1: ios=4584/0, merge=0/0, ticks=3374/0, in_queue=3374, util=98.25% 00:19:33.417 nvme0n2: ios=479/0, merge=0/0, ticks=3884/0, in_queue=3884, util=99.30% 00:19:33.417 nvme0n3: ios=9292/0, merge=0/0, ticks=2847/0, in_queue=2847, util=95.75% 00:19:33.417 nvme0n4: ios=1081/0, merge=0/0, ticks=3037/0, in_queue=3037, util=99.45% 00:19:33.417 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.417 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:33.673 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.673 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:33.929 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.929 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:34.186 06:48:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:34.186 06:48:21 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:34.443 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:34.443 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 645754 00:19:34.443 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:34.443 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:34.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:34.700 nvmf hotplug test: fio failed as expected 00:19:34.700 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:34.958 rmmod nvme_tcp 00:19:34.958 rmmod nvme_fabrics 00:19:34.958 rmmod nvme_keyring 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 643341 ']' 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 643341 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 643341 ']' 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 
-- # kill -0 643341 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 643341 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 643341' 00:19:34.958 killing process with pid 643341 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 643341 00:19:34.958 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 643341 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.216 06:48:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.777 06:48:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:37.777 00:19:37.777 real 0m23.267s 00:19:37.777 user 1m21.968s 00:19:37.777 sys 0m6.702s 00:19:37.777 06:48:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:37.777 06:48:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.777 ************************************ 00:19:37.777 END TEST nvmf_fio_target 00:19:37.777 ************************************ 00:19:37.777 06:48:24 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:37.777 06:48:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:37.777 06:48:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:37.777 06:48:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.777 ************************************ 00:19:37.777 START TEST nvmf_bdevio 00:19:37.777 ************************************ 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:37.777 * Looking for test storage... 
00:19:37.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.777 06:48:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:39.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:39.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:39.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:39.150 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.150 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:19:39.407 00:19:39.407 --- 10.0.0.2 ping statistics --- 00:19:39.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.407 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:39.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:39.407 00:19:39.407 --- 10.0.0.1 ping statistics --- 00:19:39.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.407 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.407 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=648468 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 648468 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 648468 ']' 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:39.408 06:48:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.408 [2024-07-15 06:48:26.927141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:39.408 [2024-07-15 06:48:26.927227] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.408 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.408 [2024-07-15 06:48:26.995257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.666 [2024-07-15 06:48:27.082603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.666 [2024-07-15 06:48:27.082667] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:39.666 [2024-07-15 06:48:27.082680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.666 [2024-07-15 06:48:27.082692] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.666 [2024-07-15 06:48:27.082715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.666 [2024-07-15 06:48:27.082808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.666 [2024-07-15 06:48:27.082896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:39.666 [2024-07-15 06:48:27.082918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:39.666 [2024-07-15 06:48:27.082921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.666 [2024-07-15 06:48:27.238551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.666 Malloc0 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.666 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
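The rpc_cmd wrappers traced above drive scripts/rpc.py against the freshly started target; the same data path can be stood up by hand with exactly the arguments from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # transport options straight from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420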
00:19:39.925 [2024-07-15 06:48:27.291961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.925 { 00:19:39.925 "params": { 00:19:39.925 "name": "Nvme$subsystem", 00:19:39.925 "trtype": "$TEST_TRANSPORT", 00:19:39.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.925 "adrfam": "ipv4", 00:19:39.925 "trsvcid": "$NVMF_PORT", 00:19:39.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.925 "hdgst": ${hdgst:-false}, 00:19:39.925 "ddgst": ${ddgst:-false} 00:19:39.925 }, 00:19:39.925 "method": "bdev_nvme_attach_controller" 00:19:39.925 } 00:19:39.925 EOF 00:19:39.925 )") 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:39.925 06:48:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.925 "params": { 00:19:39.925 "name": "Nvme1", 00:19:39.925 "trtype": "tcp", 00:19:39.925 "traddr": "10.0.0.2", 00:19:39.925 "adrfam": "ipv4", 00:19:39.925 "trsvcid": "4420", 00:19:39.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.925 "hdgst": false, 00:19:39.925 "ddgst": false 00:19:39.925 }, 00:19:39.925 "method": "bdev_nvme_attach_controller" 00:19:39.925 }' 00:19:39.926 [2024-07-15 06:48:27.340226] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
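gen_nvmf_target_json, whose heredoc expansion is printed just above, hands bdevio its single attach-controller entry over /dev/fd/62 via process substitution. Replayed by hand it would look roughly like the following; the params object is quoted from the trace, while the outer subsystems/config wrapper is an assumed shape that the log never prints:

bdevio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio
$bdevio --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)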
00:19:39.926 [2024-07-15 06:48:27.340318] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648615 ] 00:19:39.926 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.926 [2024-07-15 06:48:27.401629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.926 [2024-07-15 06:48:27.494105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.926 [2024-07-15 06:48:27.494158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.926 [2024-07-15 06:48:27.494161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.183 I/O targets: 00:19:40.183 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:40.183 00:19:40.183 00:19:40.183 CUnit - A unit testing framework for C - Version 2.1-3 00:19:40.183 http://cunit.sourceforge.net/ 00:19:40.183 00:19:40.183 00:19:40.183 Suite: bdevio tests on: Nvme1n1 00:19:40.183 Test: blockdev write read block ...passed 00:19:40.441 Test: blockdev write zeroes read block ...passed 00:19:40.441 Test: blockdev write zeroes read no split ...passed 00:19:40.441 Test: blockdev write zeroes read split ...passed 00:19:40.441 Test: blockdev write zeroes read split partial ...passed 00:19:40.441 Test: blockdev reset ...[2024-07-15 06:48:27.921769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.441 [2024-07-15 06:48:27.921891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1012f80 (9): Bad file descriptor 00:19:40.441 [2024-07-15 06:48:27.933988] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:40.441 passed 00:19:40.441 Test: blockdev write read 8 blocks ...passed 00:19:40.441 Test: blockdev write read size > 128k ...passed 00:19:40.441 Test: blockdev write read invalid size ...passed 00:19:40.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.441 Test: blockdev write read max offset ...passed 00:19:40.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.699 Test: blockdev writev readv 8 blocks ...passed 00:19:40.699 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.699 Test: blockdev writev readv block ...passed 00:19:40.699 Test: blockdev writev readv size > 128k ...passed 00:19:40.699 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.699 Test: blockdev comparev and writev ...[2024-07-15 06:48:28.107852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.107896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.107922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.108330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.108355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.108377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.108394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.108788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.108813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.108834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.108851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.109247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.109271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.109292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.699 [2024-07-15 06:48:28.109308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.699 passed 00:19:40.699 Test: blockdev nvme passthru rw ...passed 00:19:40.699 Test: blockdev nvme passthru vendor specific ...[2024-07-15 06:48:28.191201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.699 [2024-07-15 06:48:28.191227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.191401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.699 [2024-07-15 06:48:28.191424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.191589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.699 [2024-07-15 06:48:28.191612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.699 [2024-07-15 06:48:28.191775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.699 [2024-07-15 06:48:28.191798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.699 passed 00:19:40.699 Test: blockdev nvme admin passthru ...passed 00:19:40.699 Test: blockdev copy ...passed 00:19:40.699 00:19:40.699 Run Summary: Type Total Ran Passed Failed Inactive 00:19:40.699 suites 1 1 n/a 0 0 00:19:40.699 tests 23 23 23 0 0 00:19:40.699 asserts 152 152 152 0 n/a 00:19:40.699 00:19:40.699 Elapsed time = 1.060 seconds 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.957 rmmod nvme_tcp 00:19:40.957 rmmod nvme_fabrics 00:19:40.957 rmmod nvme_keyring 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 648468 ']' 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 648468 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
648468 ']' 00:19:40.957 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 648468 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 648468 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 648468' 00:19:40.958 killing process with pid 648468 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 648468 00:19:40.958 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 648468 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.215 06:48:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.749 06:48:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.749 00:19:43.749 real 0m5.935s 00:19:43.749 user 0m9.254s 00:19:43.749 sys 0m1.942s 00:19:43.749 06:48:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:43.749 06:48:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:43.749 ************************************ 00:19:43.749 END TEST nvmf_bdevio 00:19:43.749 ************************************ 00:19:43.749 06:48:30 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.749 06:48:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:43.749 06:48:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:43.749 06:48:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.749 ************************************ 00:19:43.749 START TEST nvmf_auth_target 00:19:43.749 ************************************ 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.749 * Looking for test storage... 
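Both teardowns in this log (after the fio suite and again here) run the same nvmfcleanup sequence: sync, drop errexit, and retry unloading nvme-tcp and nvme-fabrics for up to 20 passes, since module refcounts can lag briefly behind the last disconnect. A condensed sketch of the @117-@125 block; only one pass is visible in the trace, so the break-on-success and the back-off are assumptions:

sync                                       # @117: flush before yanking the transport
set +e                                     # @120: unload attempts may fail while refs drain
for i in {1..20}; do                       # @121
    modprobe -v -r nvme-tcp &&             # @122: -v echoes the rmmod cascade seen above
    modprobe -v -r nvme-fabrics && break   # @123
    sleep 1                                # assumed back-off; not shown in the trace
done
set -e                                     # @124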
00:19:43.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.749 06:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.125 06:48:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:45.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:45.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:45.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:45.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.125 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:45.384 00:19:45.384 --- 10.0.0.2 ping statistics --- 00:19:45.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.384 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:45.384 00:19:45.384 --- 10.0.0.1 ping statistics --- 00:19:45.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.384 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=650690 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 650690 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 650690 ']' 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
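For reference, the namespace plumbing this log builds once per test (nvmf_tcp_init, traced above for both the bdevio and auth runs) splits the two-port e810 (ice) NIC so initiator and target traffic crosses a real link: cvl_0_1 stays in the root namespace as 10.0.0.1, and cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2. Collected from the trace:

ip -4 addr flush cvl_0_0                                # clear stale addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                      # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> initiator port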
00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.384 06:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.642 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:45.642 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:45.642 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.642 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.642 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=650719 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c1bd886b0eaf40f6390e777159fa10f28f70d84f80d69db4 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uhI 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c1bd886b0eaf40f6390e777159fa10f28f70d84f80d69db4 0 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c1bd886b0eaf40f6390e777159fa10f28f70d84f80d69db4 0 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c1bd886b0eaf40f6390e777159fa10f28f70d84f80d69db4 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uhI 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uhI 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uhI 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4e0b26e180b7cf1dbc88353486168005b2d289dce80550b8950577d59ead3214 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BpQ 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4e0b26e180b7cf1dbc88353486168005b2d289dce80550b8950577d59ead3214 3 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4e0b26e180b7cf1dbc88353486168005b2d289dce80550b8950577d59ead3214 3 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4e0b26e180b7cf1dbc88353486168005b2d289dce80550b8950577d59ead3214 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BpQ 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BpQ 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.BpQ 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.900 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40906e8dfdcb572bc303b9ca58cc0017 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1x9 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40906e8dfdcb572bc303b9ca58cc0017 1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40906e8dfdcb572bc303b9ca58cc0017 1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=40906e8dfdcb572bc303b9ca58cc0017 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1x9 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1x9 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.1x9 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9f501a1c55dd87f984dcf7f9e65b225ae9928329e09b6226 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UEn 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9f501a1c55dd87f984dcf7f9e65b225ae9928329e09b6226 2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9f501a1c55dd87f984dcf7f9e65b225ae9928329e09b6226 2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9f501a1c55dd87f984dcf7f9e65b225ae9928329e09b6226 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UEn 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UEn 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.UEn 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5963af9700dc780eeb0695d52361e36df86b50ab8df599c 00:19:45.901 
06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GYh 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5963af9700dc780eeb0695d52361e36df86b50ab8df599c 2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5963af9700dc780eeb0695d52361e36df86b50ab8df599c 2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5963af9700dc780eeb0695d52361e36df86b50ab8df599c 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:45.901 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GYh 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GYh 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.GYh 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e8f7b6dc78c49a8b4bc2c7d2a11b06c2 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Dl0 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e8f7b6dc78c49a8b4bc2c7d2a11b06c2 1 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e8f7b6dc78c49a8b4bc2c7d2a11b06c2 1 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e8f7b6dc78c49a8b4bc2c7d2a11b06c2 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Dl0 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Dl0 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Dl0 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3aebf6e159a7ef8e689fc0118e9ce5499a3493bb565b05374331cdc3c6f26725 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DTD 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3aebf6e159a7ef8e689fc0118e9ce5499a3493bb565b05374331cdc3c6f26725 3 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3aebf6e159a7ef8e689fc0118e9ce5499a3493bb565b05374331cdc3c6f26725 3 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3aebf6e159a7ef8e689fc0118e9ce5499a3493bb565b05374331cdc3c6f26725 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DTD 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DTD 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.DTD 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 650690 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 650690 ']' 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
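
The pass above generates the DH-HMAC-CHAP secrets used for the rest of the run: gen_dhchap_key draws len/2 random bytes as a hex string from /dev/urandom, and format_key (the `python -` step in the trace) wraps that string into the DHHC-1 secret format — a digest tag (00 = null, 01 = sha256, 02 = sha384, 03 = sha512) followed by a base64 blob. Judging by the secrets printed later in this log, the base64 payload is the ASCII hex string with a little-endian CRC32 appended. A minimal sketch under that assumption, taking the numeric digest tag directly instead of the name lookup the script performs, and printing the secret instead of writing it to a chmod-0600 temp file:

    gen_dhchap_key() {
        local digest=$1 len=$2 key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of key material
        python3 -c 'import sys, base64, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"
    }
    gen_dhchap_key 1 32    # sha256-tagged 32-hex-char secret, e.g. DHHC-1:01:...:

The resulting strings are what keyring_file_add_key registers on both the target and host sides below, and what `nvme connect --dhchap-secret`/`--dhchap-ctrl-secret` passes on the initiator.
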
00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.159 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 650719 /var/tmp/host.sock 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 650719 ']' 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.417 06:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uhI 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uhI 00:19:46.675 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uhI 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.BpQ ]] 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BpQ 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BpQ 00:19:46.933 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BpQ 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1x9 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.1x9 00:19:47.190 06:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.1x9 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.UEn ]] 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UEn 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UEn 00:19:47.448 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UEn 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GYh 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GYh 00:19:47.705 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GYh 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Dl0 ]] 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dl0 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dl0 00:19:47.962 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Dl0 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DTD 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DTD 00:19:48.219 06:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DTD 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.477 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.735 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.994 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.254 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.512 { 00:19:49.512 "cntlid": 1, 00:19:49.512 "qid": 0, 00:19:49.512 "state": "enabled", 00:19:49.512 "listen_address": { 00:19:49.512 "trtype": "TCP", 00:19:49.512 "adrfam": "IPv4", 00:19:49.512 "traddr": "10.0.0.2", 00:19:49.512 "trsvcid": "4420" 00:19:49.512 }, 00:19:49.512 "peer_address": { 00:19:49.512 "trtype": "TCP", 00:19:49.512 "adrfam": "IPv4", 00:19:49.512 "traddr": "10.0.0.1", 00:19:49.512 "trsvcid": "59796" 00:19:49.512 }, 00:19:49.512 "auth": { 00:19:49.512 "state": "completed", 00:19:49.512 "digest": "sha256", 00:19:49.512 "dhgroup": "null" 00:19:49.512 } 00:19:49.512 } 00:19:49.512 ]' 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.512 06:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.770 06:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.707 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.965 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.224 00:19:51.224 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.224 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.224 06:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.537 { 00:19:51.537 "cntlid": 3, 00:19:51.537 "qid": 0, 00:19:51.537 "state": "enabled", 00:19:51.537 "listen_address": { 00:19:51.537 
"trtype": "TCP", 00:19:51.537 "adrfam": "IPv4", 00:19:51.537 "traddr": "10.0.0.2", 00:19:51.537 "trsvcid": "4420" 00:19:51.537 }, 00:19:51.537 "peer_address": { 00:19:51.537 "trtype": "TCP", 00:19:51.537 "adrfam": "IPv4", 00:19:51.537 "traddr": "10.0.0.1", 00:19:51.537 "trsvcid": "59826" 00:19:51.537 }, 00:19:51.537 "auth": { 00:19:51.537 "state": "completed", 00:19:51.537 "digest": "sha256", 00:19:51.537 "dhgroup": "null" 00:19:51.537 } 00:19:51.537 } 00:19:51.537 ]' 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.537 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.794 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.794 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.794 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.794 06:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.730 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.989 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.556 00:19:53.556 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.556 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.556 06:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.556 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.556 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.556 06:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.556 06:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.814 { 00:19:53.814 "cntlid": 5, 00:19:53.814 "qid": 0, 00:19:53.814 "state": "enabled", 00:19:53.814 "listen_address": { 00:19:53.814 "trtype": "TCP", 00:19:53.814 "adrfam": "IPv4", 00:19:53.814 "traddr": "10.0.0.2", 00:19:53.814 "trsvcid": "4420" 00:19:53.814 }, 00:19:53.814 "peer_address": { 00:19:53.814 "trtype": "TCP", 00:19:53.814 "adrfam": "IPv4", 00:19:53.814 "traddr": "10.0.0.1", 00:19:53.814 "trsvcid": "59844" 00:19:53.814 }, 00:19:53.814 "auth": { 00:19:53.814 "state": "completed", 00:19:53.814 "digest": "sha256", 00:19:53.814 "dhgroup": "null" 00:19:53.814 } 00:19:53.814 } 00:19:53.814 ]' 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.814 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.070 06:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.007 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.265 06:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.523 00:19:55.524 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.524 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.524 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.782 { 00:19:55.782 "cntlid": 7, 00:19:55.782 "qid": 0, 00:19:55.782 "state": "enabled", 00:19:55.782 "listen_address": { 00:19:55.782 "trtype": "TCP", 00:19:55.782 "adrfam": "IPv4", 00:19:55.782 "traddr": "10.0.0.2", 00:19:55.782 "trsvcid": "4420" 00:19:55.782 }, 00:19:55.782 "peer_address": { 00:19:55.782 "trtype": "TCP", 00:19:55.782 "adrfam": "IPv4", 00:19:55.782 "traddr": "10.0.0.1", 00:19:55.782 "trsvcid": "46962" 00:19:55.782 }, 00:19:55.782 "auth": { 00:19:55.782 "state": "completed", 00:19:55.782 "digest": "sha256", 00:19:55.782 "dhgroup": "null" 00:19:55.782 } 00:19:55.782 } 00:19:55.782 ]' 00:19:55.782 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.040 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.298 06:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.233 
06:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.233 06:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.491 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:57.491 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.492 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.061 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.061 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.319 06:48:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.319 { 00:19:58.319 "cntlid": 9, 00:19:58.319 "qid": 0, 00:19:58.319 "state": "enabled", 00:19:58.319 "listen_address": { 00:19:58.319 "trtype": "TCP", 00:19:58.319 "adrfam": "IPv4", 00:19:58.319 "traddr": "10.0.0.2", 00:19:58.319 "trsvcid": "4420" 00:19:58.319 }, 00:19:58.319 "peer_address": { 00:19:58.319 "trtype": "TCP", 00:19:58.319 "adrfam": "IPv4", 00:19:58.319 "traddr": "10.0.0.1", 00:19:58.319 "trsvcid": "46996" 00:19:58.319 }, 00:19:58.319 "auth": { 00:19:58.319 "state": "completed", 00:19:58.319 "digest": "sha256", 00:19:58.319 "dhgroup": "ffdhe2048" 00:19:58.319 } 00:19:58.319 } 00:19:58.319 ]' 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.319 06:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.576 06:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.507 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.764 06:48:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.764 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.021 00:20:00.021 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.021 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.021 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.278 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.278 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.278 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.278 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.534 { 00:20:00.534 "cntlid": 11, 00:20:00.534 "qid": 0, 00:20:00.534 "state": "enabled", 00:20:00.534 "listen_address": { 00:20:00.534 "trtype": "TCP", 00:20:00.534 "adrfam": "IPv4", 00:20:00.534 "traddr": "10.0.0.2", 00:20:00.534 "trsvcid": "4420" 00:20:00.534 }, 00:20:00.534 "peer_address": { 00:20:00.534 "trtype": "TCP", 00:20:00.534 "adrfam": "IPv4", 00:20:00.534 "traddr": "10.0.0.1", 00:20:00.534 "trsvcid": "47024" 00:20:00.534 }, 00:20:00.534 "auth": { 00:20:00.534 "state": "completed", 00:20:00.534 "digest": "sha256", 00:20:00.534 "dhgroup": "ffdhe2048" 00:20:00.534 } 00:20:00.534 } 00:20:00.534 ]' 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.534 06:48:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.534 06:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.791 06:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.725 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.983 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.240 00:20:02.240 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.240 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.240 06:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.496 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.496 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.497 06:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.497 06:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.497 06:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.497 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.497 { 00:20:02.497 "cntlid": 13, 00:20:02.497 "qid": 0, 00:20:02.497 "state": "enabled", 00:20:02.497 "listen_address": { 00:20:02.497 "trtype": "TCP", 00:20:02.497 "adrfam": "IPv4", 00:20:02.497 "traddr": "10.0.0.2", 00:20:02.497 "trsvcid": "4420" 00:20:02.497 }, 00:20:02.497 "peer_address": { 00:20:02.497 "trtype": "TCP", 00:20:02.497 "adrfam": "IPv4", 00:20:02.497 "traddr": "10.0.0.1", 00:20:02.497 "trsvcid": "47036" 00:20:02.497 }, 00:20:02.497 "auth": { 00:20:02.497 "state": "completed", 00:20:02.497 "digest": "sha256", 00:20:02.497 "dhgroup": "ffdhe2048" 00:20:02.497 } 00:20:02.497 } 00:20:02.497 ]' 00:20:02.497 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.753 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.010 06:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:03.943 06:48:51 
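Worth decoding once: every secret handed to nvme connect above uses the DHHC-1 container format, DHHC-1:<hh>:<base64>:, where <hh> names the hash used to transform the underlying secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512 in the NVMe DH-HMAC-CHAP key format; in this test the key index happens to match that id) and the base64 payload carries the secret plus, per the spec, a trailing CRC-32 — which is why the payloads grow from key 0 to key 3. A standalone sketch that picks one of the trace's secrets apart with standard tools only:

    # Dissect a DHHC-1 secret copied from the trace (the key1 host secret).
    secret='DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug:'

    IFS=: read -r container hmac b64 _ <<<"$secret"
    echo "container=$container hmac=$hmac"    # -> container=DHHC-1 hmac=01

    # hmac=01 (SHA-256) implies a 32-byte secret; +4 bytes of CRC-32 = 36.
    printf '%s' "$b64" | base64 -d | wc -c    # -> 36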
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.943 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.201 06:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.458 00:20:04.458 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.458 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.458 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
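Note the asymmetry in the key3 pass just above: nvmf_subsystem_add_host and bdev_nvme_attach_controller were called with --dhchap-key key3 and no --dhchap-ctrlr-key. That is the ckey expansion at auth.sh@37 doing its job: ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} emits the controller-key argument only when a controller key exists for that index, so key3 exercises unidirectional authentication (the target challenges the host but is not challenged back). A short demonstration of the :+ idiom, with $3 replaced by a plain variable and hypothetical key names:

    # ${var:+word} expands to word only if var is set and non-empty.
    ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)
    for keyid in 1 3; do
        args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid extra args: ${args[*]:-<none>}"
    done
    # -> key1 extra args: --dhchap-ctrlr-key ckey1
    # -> key3 extra args: <none>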
00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.716 { 00:20:04.716 "cntlid": 15, 00:20:04.716 "qid": 0, 00:20:04.716 "state": "enabled", 00:20:04.716 "listen_address": { 00:20:04.716 "trtype": "TCP", 00:20:04.716 "adrfam": "IPv4", 00:20:04.716 "traddr": "10.0.0.2", 00:20:04.716 "trsvcid": "4420" 00:20:04.716 }, 00:20:04.716 "peer_address": { 00:20:04.716 "trtype": "TCP", 00:20:04.716 "adrfam": "IPv4", 00:20:04.716 "traddr": "10.0.0.1", 00:20:04.716 "trsvcid": "46866" 00:20:04.716 }, 00:20:04.716 "auth": { 00:20:04.716 "state": "completed", 00:20:04.716 "digest": "sha256", 00:20:04.716 "dhgroup": "ffdhe2048" 00:20:04.716 } 00:20:04.716 } 00:20:04.716 ]' 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.716 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.978 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.978 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.978 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.978 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.978 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.311 06:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.246 06:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.811 00:20:06.811 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.811 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.811 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.070 { 00:20:07.070 "cntlid": 17, 00:20:07.070 "qid": 0, 00:20:07.070 "state": "enabled", 00:20:07.070 "listen_address": { 00:20:07.070 "trtype": "TCP", 00:20:07.070 "adrfam": "IPv4", 00:20:07.070 "traddr": "10.0.0.2", 00:20:07.070 "trsvcid": "4420" 00:20:07.070 }, 00:20:07.070 "peer_address": { 00:20:07.070 "trtype": "TCP", 00:20:07.070 "adrfam": "IPv4", 00:20:07.070 "traddr": "10.0.0.1", 00:20:07.070 "trsvcid": "46908" 00:20:07.070 }, 00:20:07.070 "auth": { 00:20:07.070 "state": "completed", 00:20:07.070 "digest": "sha256", 00:20:07.070 "dhgroup": "ffdhe3072" 00:20:07.070 } 00:20:07.070 } 00:20:07.070 ]' 00:20:07.070 06:48:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.070 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.640 06:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.576 06:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.835 
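Interleaved with those RPC checks, each pass closes with a second, independent leg: the kernel initiator is pointed at the same subsystem with nvme connect and the matching DHHC-1 secrets, then torn down, and the "NQN:... disconnected 1 controller(s)" line proves a controller really came up. A sketch of that round trip, with the key1 secret pair copied from the trace (-i 1 caps the connection at one I/O queue, as in the script):

    # Kernel-initiator leg of a pass, as seen at auth.sh@52/@55 in the trace.
    host_id=5b23e107-7094-e311-b1cb-001e67a97d55
    host_key='DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug:'
    ctrl_key='DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==:'

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${host_id}" --hostid "$host_id" \
        --dhchap-secret "$host_key" \
        ${ctrl_key:+--dhchap-ctrl-secret "$ctrl_key"}

    # Success leaves a live controller behind; the teardown reports
    # "disconnected 1 controller(s)" exactly as the trace shows each time.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0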
06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.835 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.094 00:20:09.094 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.094 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.094 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.353 { 00:20:09.353 "cntlid": 19, 00:20:09.353 "qid": 0, 00:20:09.353 "state": "enabled", 00:20:09.353 "listen_address": { 00:20:09.353 "trtype": "TCP", 00:20:09.353 "adrfam": "IPv4", 00:20:09.353 "traddr": "10.0.0.2", 00:20:09.353 "trsvcid": "4420" 00:20:09.353 }, 00:20:09.353 "peer_address": { 00:20:09.353 "trtype": "TCP", 00:20:09.353 "adrfam": "IPv4", 00:20:09.353 "traddr": "10.0.0.1", 00:20:09.353 "trsvcid": "46940" 00:20:09.353 }, 00:20:09.353 "auth": { 00:20:09.353 "state": "completed", 00:20:09.353 "digest": "sha256", 00:20:09.353 "dhgroup": "ffdhe3072" 00:20:09.353 } 00:20:09.353 } 00:20:09.353 ]' 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.353 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.354 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.354 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.354 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.354 06:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.613 06:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.990 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.248 00:20:11.248 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.248 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
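The auth.sh@92-96 markers that keep reappearing give away the driver: an outer loop over DH groups, an inner loop over key indices, and a bdev_nvme_set_options call before each connect_authenticate so the host will only offer the one digest/DH-group combination under test. Reconstructed from those markers (array contents are inferred from what this excerpt reaches, not copied from auth.sh):

    # Loop structure implied by auth.sh@92-96; connect_authenticate and the
    # real arrays are defined earlier in auth.sh and are assumed here.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)  # as far as this excerpt gets
    keys=(key0 key1 key2 key3)

    for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@92
        for keyid in "${!keys[@]}"; do                   # auth.sh@93
            # Pin the host to one digest/DH group so the qpair dump's
            # negotiated values are forced rather than merely preferred.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"  # auth.sh@94
            connect_authenticate sha256 "$dhgroup" "$keyid"           # auth.sh@96
        done
    done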
00:20:11.248 06:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.506 { 00:20:11.506 "cntlid": 21, 00:20:11.506 "qid": 0, 00:20:11.506 "state": "enabled", 00:20:11.506 "listen_address": { 00:20:11.506 "trtype": "TCP", 00:20:11.506 "adrfam": "IPv4", 00:20:11.506 "traddr": "10.0.0.2", 00:20:11.506 "trsvcid": "4420" 00:20:11.506 }, 00:20:11.506 "peer_address": { 00:20:11.506 "trtype": "TCP", 00:20:11.506 "adrfam": "IPv4", 00:20:11.506 "traddr": "10.0.0.1", 00:20:11.506 "trsvcid": "46948" 00:20:11.506 }, 00:20:11.506 "auth": { 00:20:11.506 "state": "completed", 00:20:11.506 "digest": "sha256", 00:20:11.506 "dhgroup": "ffdhe3072" 00:20:11.506 } 00:20:11.506 } 00:20:11.506 ]' 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.506 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.764 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.764 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.764 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.764 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.764 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.021 06:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.956 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.213 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.471 00:20:13.471 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.471 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.471 06:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.729 { 00:20:13.729 "cntlid": 23, 00:20:13.729 "qid": 0, 00:20:13.729 "state": "enabled", 00:20:13.729 "listen_address": { 00:20:13.729 "trtype": "TCP", 00:20:13.729 "adrfam": "IPv4", 00:20:13.729 "traddr": "10.0.0.2", 00:20:13.729 "trsvcid": "4420" 00:20:13.729 }, 00:20:13.729 "peer_address": { 00:20:13.729 "trtype": "TCP", 00:20:13.729 "adrfam": "IPv4", 
00:20:13.729 "traddr": "10.0.0.1", 00:20:13.729 "trsvcid": "46974" 00:20:13.729 }, 00:20:13.729 "auth": { 00:20:13.729 "state": "completed", 00:20:13.729 "digest": "sha256", 00:20:13.729 "dhgroup": "ffdhe3072" 00:20:13.729 } 00:20:13.729 } 00:20:13.729 ]' 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.729 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.987 06:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.364 06:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.929 00:20:15.929 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.929 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.929 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.185 { 00:20:16.185 "cntlid": 25, 00:20:16.185 "qid": 0, 00:20:16.185 "state": "enabled", 00:20:16.185 "listen_address": { 00:20:16.185 "trtype": "TCP", 00:20:16.185 "adrfam": "IPv4", 00:20:16.185 "traddr": "10.0.0.2", 00:20:16.185 "trsvcid": "4420" 00:20:16.185 }, 00:20:16.185 "peer_address": { 00:20:16.185 "trtype": "TCP", 00:20:16.185 "adrfam": "IPv4", 00:20:16.185 "traddr": "10.0.0.1", 00:20:16.185 "trsvcid": "34974" 00:20:16.185 }, 00:20:16.185 "auth": { 00:20:16.185 "state": "completed", 00:20:16.185 "digest": "sha256", 00:20:16.185 "dhgroup": "ffdhe4096" 00:20:16.185 } 00:20:16.185 } 00:20:16.185 ]' 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.185 06:49:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.442 06:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.376 06:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.634 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.201 00:20:18.201 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.201 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.201 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.459 { 00:20:18.459 "cntlid": 27, 00:20:18.459 "qid": 0, 00:20:18.459 "state": "enabled", 00:20:18.459 "listen_address": { 00:20:18.459 "trtype": "TCP", 00:20:18.459 "adrfam": "IPv4", 00:20:18.459 "traddr": "10.0.0.2", 00:20:18.459 "trsvcid": "4420" 00:20:18.459 }, 00:20:18.459 "peer_address": { 00:20:18.459 "trtype": "TCP", 00:20:18.459 "adrfam": "IPv4", 00:20:18.459 "traddr": "10.0.0.1", 00:20:18.459 "trsvcid": "35000" 00:20:18.459 }, 00:20:18.459 "auth": { 00:20:18.459 "state": "completed", 00:20:18.459 "digest": "sha256", 00:20:18.459 "dhgroup": "ffdhe4096" 00:20:18.459 } 00:20:18.459 } 00:20:18.459 ]' 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.459 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.460 06:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.460 06:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.460 06:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.460 06:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.717 06:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
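Two RPC endpoints are in play throughout: rpc_cmd drives the target application over the default SPDK socket (the nvmf_subsystem_add_host/remove_host/get_qpairs calls), while hostrpc routes the same rpc.py at a second SPDK instance playing the host, on /var/tmp/host.sock (the bdev_nvme_set_options/attach_controller/get_controllers/detach_controller calls). As every auth.sh@31 expansion in the trace shows, the wrapper is nothing more than:

    # hostrpc, as expanded at auth.sh@31: send an RPC to the SPDK host instance.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # e.g. drop the host-side controller at the end of a pass:
    hostrpc bdev_nvme_detach_controller nvme0

Keeping target and host in separate processes is what lets a single machine exercise both sides of the DH-HMAC-CHAP exchange in one test.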
00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.742 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.001 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.259 00:20:20.517 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.517 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.517 06:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.517 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.775 
06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.775 { 00:20:20.775 "cntlid": 29, 00:20:20.775 "qid": 0, 00:20:20.775 "state": "enabled", 00:20:20.775 "listen_address": { 00:20:20.775 "trtype": "TCP", 00:20:20.775 "adrfam": "IPv4", 00:20:20.775 "traddr": "10.0.0.2", 00:20:20.775 "trsvcid": "4420" 00:20:20.775 }, 00:20:20.775 "peer_address": { 00:20:20.775 "trtype": "TCP", 00:20:20.775 "adrfam": "IPv4", 00:20:20.775 "traddr": "10.0.0.1", 00:20:20.775 "trsvcid": "35020" 00:20:20.775 }, 00:20:20.775 "auth": { 00:20:20.775 "state": "completed", 00:20:20.775 "digest": "sha256", 00:20:20.775 "dhgroup": "ffdhe4096" 00:20:20.775 } 00:20:20.775 } 00:20:20.775 ]' 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.775 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.032 06:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.966 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.223 06:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.789 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.789 { 00:20:22.789 "cntlid": 31, 00:20:22.789 "qid": 0, 00:20:22.789 "state": "enabled", 00:20:22.789 "listen_address": { 00:20:22.789 "trtype": "TCP", 00:20:22.789 "adrfam": "IPv4", 00:20:22.789 "traddr": "10.0.0.2", 00:20:22.789 "trsvcid": "4420" 00:20:22.789 }, 00:20:22.789 "peer_address": { 00:20:22.789 "trtype": "TCP", 00:20:22.789 "adrfam": "IPv4", 00:20:22.789 "traddr": "10.0.0.1", 00:20:22.789 "trsvcid": "35064" 00:20:22.789 }, 00:20:22.789 "auth": { 00:20:22.789 "state": "completed", 00:20:22.789 "digest": "sha256", 00:20:22.789 "dhgroup": "ffdhe4096" 00:20:22.789 } 00:20:22.789 } 00:20:22.789 ]' 00:20:22.789 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.047 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.304 06:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.236 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:24.492 06:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.057 00:20:25.057 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.057 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.057 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.314 { 00:20:25.314 "cntlid": 33, 00:20:25.314 "qid": 0, 00:20:25.314 "state": "enabled", 00:20:25.314 "listen_address": { 00:20:25.314 "trtype": "TCP", 00:20:25.314 "adrfam": "IPv4", 00:20:25.314 "traddr": "10.0.0.2", 00:20:25.314 "trsvcid": "4420" 00:20:25.314 }, 00:20:25.314 "peer_address": { 00:20:25.314 "trtype": "TCP", 00:20:25.314 "adrfam": "IPv4", 00:20:25.314 "traddr": "10.0.0.1", 00:20:25.314 "trsvcid": "53852" 00:20:25.314 }, 00:20:25.314 "auth": { 00:20:25.314 "state": "completed", 00:20:25.314 "digest": "sha256", 00:20:25.314 "dhgroup": "ffdhe6144" 00:20:25.314 } 00:20:25.314 } 00:20:25.314 ]' 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.314 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.315 06:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.574 06:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:26.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.507 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.763 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:26.763 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.763 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.763 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.764 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.328 00:20:27.328 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.328 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.328 06:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
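The qpairs dump that follows is where each iteration's assertions run: the negotiated auth block of the first qpair is compared against the digest and dhgroup that were just configured. A minimal sketch of those checks, reconstructed from the auth.sh@44-@49 markers visible in this trace (the hostrpc/rpc_cmd helpers and the $digest/$dhgroup locals are taken from the trace; everything else is an assumption, not the verbatim script):

    # host side must report exactly the controller we attached
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # target side reports the qpair list, including the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # the negotiated values must match what bdev_nvme_set_options restricted the host to
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    # "completed" means the DH-HMAC-CHAP exchange finished, not merely started
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    # tear the bdev controller down before re-testing the same key via the kernel initiator
    hostrpc bdev_nvme_detach_controller nvme0
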
00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.586 { 00:20:27.586 "cntlid": 35, 00:20:27.586 "qid": 0, 00:20:27.586 "state": "enabled", 00:20:27.586 "listen_address": { 00:20:27.586 "trtype": "TCP", 00:20:27.586 "adrfam": "IPv4", 00:20:27.586 "traddr": "10.0.0.2", 00:20:27.586 "trsvcid": "4420" 00:20:27.586 }, 00:20:27.586 "peer_address": { 00:20:27.586 "trtype": "TCP", 00:20:27.586 "adrfam": "IPv4", 00:20:27.586 "traddr": "10.0.0.1", 00:20:27.586 "trsvcid": "53888" 00:20:27.586 }, 00:20:27.586 "auth": { 00:20:27.586 "state": "completed", 00:20:27.586 "digest": "sha256", 00:20:27.586 "dhgroup": "ffdhe6144" 00:20:27.586 } 00:20:27.586 } 00:20:27.586 ]' 00:20:27.586 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.844 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.102 06:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.037 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
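Between the bdev_nvme_set_options call above and the connect_authenticate sha256 ffdhe6144 2 call that follows, the script is simply re-entering the same helper with the next key index. Its setup half, as it can be read off the auth.sh@34-@40 markers (a sketch under those assumptions; $hostnqn stands in for the uuid-based host NQN seen in the trace, and the helper may differ in detail from the real target/auth.sh):

    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key="key$3"
        # ckey expands to zero words when no controller key exists for this index,
        # so bidirectional auth is only requested where a ckey is defined
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        # register the host NQN with the key under test on the target
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        # attach from the host side, forcing DH-HMAC-CHAP with the same key
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" "${ckey[@]}"
        # verification and teardown then proceed as in the qpair checks shown earlier
    }
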
00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.295 06:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.862 00:20:29.862 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.862 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.862 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.120 { 00:20:30.120 "cntlid": 37, 00:20:30.120 "qid": 0, 00:20:30.120 "state": "enabled", 00:20:30.120 "listen_address": { 00:20:30.120 "trtype": "TCP", 00:20:30.120 "adrfam": "IPv4", 00:20:30.120 "traddr": "10.0.0.2", 00:20:30.120 "trsvcid": "4420" 00:20:30.120 }, 00:20:30.120 "peer_address": { 00:20:30.120 "trtype": "TCP", 00:20:30.120 "adrfam": "IPv4", 00:20:30.120 "traddr": "10.0.0.1", 00:20:30.120 "trsvcid": "53928" 00:20:30.120 }, 00:20:30.120 "auth": { 00:20:30.120 "state": "completed", 00:20:30.120 "digest": "sha256", 00:20:30.120 "dhgroup": "ffdhe6144" 00:20:30.120 } 00:20:30.120 } 00:20:30.120 ]' 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.120 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.379 06:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.315 06:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.882 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.141 00:20:32.141 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.141 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.141 06:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.710 { 00:20:32.710 "cntlid": 39, 00:20:32.710 "qid": 0, 00:20:32.710 "state": "enabled", 00:20:32.710 "listen_address": { 00:20:32.710 "trtype": "TCP", 00:20:32.710 "adrfam": "IPv4", 00:20:32.710 "traddr": "10.0.0.2", 00:20:32.710 "trsvcid": "4420" 00:20:32.710 }, 00:20:32.710 "peer_address": { 00:20:32.710 "trtype": "TCP", 00:20:32.710 "adrfam": "IPv4", 00:20:32.710 "traddr": "10.0.0.1", 00:20:32.710 "trsvcid": "53962" 00:20:32.710 }, 00:20:32.710 "auth": { 00:20:32.710 "state": "completed", 00:20:32.710 "digest": "sha256", 00:20:32.710 "dhgroup": "ffdhe6144" 00:20:32.710 } 00:20:32.710 } 00:20:32.710 ]' 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.710 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.711 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.711 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.970 06:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.933 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.192 06:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.131 00:20:35.131 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.131 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.131 06:49:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.390 { 00:20:35.390 "cntlid": 41, 00:20:35.390 "qid": 0, 00:20:35.390 "state": "enabled", 00:20:35.390 "listen_address": { 00:20:35.390 "trtype": "TCP", 00:20:35.390 "adrfam": "IPv4", 00:20:35.390 "traddr": "10.0.0.2", 00:20:35.390 "trsvcid": "4420" 00:20:35.390 }, 00:20:35.390 "peer_address": { 00:20:35.390 "trtype": "TCP", 00:20:35.390 "adrfam": "IPv4", 00:20:35.390 "traddr": "10.0.0.1", 00:20:35.390 "trsvcid": "41652" 00:20:35.390 }, 00:20:35.390 "auth": { 00:20:35.390 "state": "completed", 00:20:35.390 "digest": "sha256", 00:20:35.390 "dhgroup": "ffdhe8192" 00:20:35.390 } 00:20:35.390 } 00:20:35.390 ]' 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.390 06:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.650 06:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.584 06:49:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.584 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.842 06:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.780 00:20:37.780 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.780 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.780 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.038 { 00:20:38.038 "cntlid": 43, 00:20:38.038 "qid": 0, 00:20:38.038 "state": "enabled", 00:20:38.038 "listen_address": { 00:20:38.038 "trtype": "TCP", 00:20:38.038 "adrfam": "IPv4", 00:20:38.038 "traddr": "10.0.0.2", 00:20:38.038 "trsvcid": "4420" 00:20:38.038 }, 00:20:38.038 "peer_address": { 00:20:38.038 "trtype": "TCP", 00:20:38.038 
"adrfam": "IPv4", 00:20:38.038 "traddr": "10.0.0.1", 00:20:38.038 "trsvcid": "41686" 00:20:38.038 }, 00:20:38.038 "auth": { 00:20:38.038 "state": "completed", 00:20:38.038 "digest": "sha256", 00:20:38.038 "dhgroup": "ffdhe8192" 00:20:38.038 } 00:20:38.038 } 00:20:38.038 ]' 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.038 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.297 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.298 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.298 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.556 06:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.491 06:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.749 06:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.686 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.686 06:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.945 { 00:20:40.945 "cntlid": 45, 00:20:40.945 "qid": 0, 00:20:40.945 "state": "enabled", 00:20:40.945 "listen_address": { 00:20:40.945 "trtype": "TCP", 00:20:40.945 "adrfam": "IPv4", 00:20:40.945 "traddr": "10.0.0.2", 00:20:40.945 "trsvcid": "4420" 00:20:40.945 }, 00:20:40.945 "peer_address": { 00:20:40.945 "trtype": "TCP", 00:20:40.945 "adrfam": "IPv4", 00:20:40.945 "traddr": "10.0.0.1", 00:20:40.945 "trsvcid": "41720" 00:20:40.945 }, 00:20:40.945 "auth": { 00:20:40.945 "state": "completed", 00:20:40.945 "digest": "sha256", 00:20:40.945 "dhgroup": "ffdhe8192" 00:20:40.945 } 00:20:40.945 } 00:20:40.945 ]' 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.945 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.203 06:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.141 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.400 06:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.338 00:20:43.338 06:49:30 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.338 06:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.338 06:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.595 { 00:20:43.595 "cntlid": 47, 00:20:43.595 "qid": 0, 00:20:43.595 "state": "enabled", 00:20:43.595 "listen_address": { 00:20:43.595 "trtype": "TCP", 00:20:43.595 "adrfam": "IPv4", 00:20:43.595 "traddr": "10.0.0.2", 00:20:43.595 "trsvcid": "4420" 00:20:43.595 }, 00:20:43.595 "peer_address": { 00:20:43.595 "trtype": "TCP", 00:20:43.595 "adrfam": "IPv4", 00:20:43.595 "traddr": "10.0.0.1", 00:20:43.595 "trsvcid": "41760" 00:20:43.595 }, 00:20:43.595 "auth": { 00:20:43.595 "state": "completed", 00:20:43.595 "digest": "sha256", 00:20:43.595 "dhgroup": "ffdhe8192" 00:20:43.595 } 00:20:43.595 } 00:20:43.595 ]' 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.595 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.853 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.853 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.853 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.853 06:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.226 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.227 06:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.484 00:20:45.484 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.484 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.484 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.741 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:45.741 { 00:20:45.741 "cntlid": 49, 00:20:45.741 "qid": 0, 00:20:45.741 "state": "enabled", 00:20:45.741 "listen_address": { 00:20:45.741 "trtype": "TCP", 00:20:45.741 "adrfam": "IPv4", 00:20:45.741 "traddr": "10.0.0.2", 00:20:45.741 "trsvcid": "4420" 00:20:45.742 }, 00:20:45.742 "peer_address": { 00:20:45.742 "trtype": "TCP", 00:20:45.742 "adrfam": "IPv4", 00:20:45.742 "traddr": "10.0.0.1", 00:20:45.742 "trsvcid": "51636" 00:20:45.742 }, 00:20:45.742 "auth": { 00:20:45.742 "state": "completed", 00:20:45.742 "digest": "sha384", 00:20:45.742 "dhgroup": "null" 00:20:45.742 } 00:20:45.742 } 00:20:45.742 ]' 00:20:45.742 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.742 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.742 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.742 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:45.742 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.001 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.001 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.001 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.258 06:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.189 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.447 06:49:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.447 06:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.738 00:20:47.738 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.738 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.738 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.996 { 00:20:47.996 "cntlid": 51, 00:20:47.996 "qid": 0, 00:20:47.996 "state": "enabled", 00:20:47.996 "listen_address": { 00:20:47.996 "trtype": "TCP", 00:20:47.996 "adrfam": "IPv4", 00:20:47.996 "traddr": "10.0.0.2", 00:20:47.996 "trsvcid": "4420" 00:20:47.996 }, 00:20:47.996 "peer_address": { 00:20:47.996 "trtype": "TCP", 00:20:47.996 "adrfam": "IPv4", 00:20:47.996 "traddr": "10.0.0.1", 00:20:47.996 "trsvcid": "51664" 00:20:47.996 }, 00:20:47.996 "auth": { 00:20:47.996 "state": "completed", 00:20:47.996 "digest": "sha384", 00:20:47.996 "dhgroup": "null" 00:20:47.996 } 00:20:47.996 } 00:20:47.996 ]' 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.996 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:47.997 06:49:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.997 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.997 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.997 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.253 06:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.185 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:49.442 06:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.699 00:20:49.699 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.699 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.699 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.957 { 00:20:49.957 "cntlid": 53, 00:20:49.957 "qid": 0, 00:20:49.957 "state": "enabled", 00:20:49.957 "listen_address": { 00:20:49.957 "trtype": "TCP", 00:20:49.957 "adrfam": "IPv4", 00:20:49.957 "traddr": "10.0.0.2", 00:20:49.957 "trsvcid": "4420" 00:20:49.957 }, 00:20:49.957 "peer_address": { 00:20:49.957 "trtype": "TCP", 00:20:49.957 "adrfam": "IPv4", 00:20:49.957 "traddr": "10.0.0.1", 00:20:49.957 "trsvcid": "51684" 00:20:49.957 }, 00:20:49.957 "auth": { 00:20:49.957 "state": "completed", 00:20:49.957 "digest": "sha384", 00:20:49.957 "dhgroup": "null" 00:20:49.957 } 00:20:49.957 } 00:20:49.957 ]' 00:20:49.957 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.214 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.471 06:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.404 06:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.662 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.919 00:20:51.919 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.919 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.919 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.177 { 00:20:52.177 "cntlid": 55, 00:20:52.177 "qid": 0, 00:20:52.177 "state": "enabled", 00:20:52.177 "listen_address": { 00:20:52.177 "trtype": "TCP", 00:20:52.177 "adrfam": "IPv4", 00:20:52.177 "traddr": "10.0.0.2", 00:20:52.177 "trsvcid": "4420" 00:20:52.177 }, 00:20:52.177 "peer_address": { 00:20:52.177 "trtype": "TCP", 00:20:52.177 "adrfam": "IPv4", 00:20:52.177 "traddr": "10.0.0.1", 00:20:52.177 "trsvcid": "51714" 00:20:52.177 }, 00:20:52.177 "auth": { 00:20:52.177 "state": "completed", 00:20:52.177 "digest": "sha384", 00:20:52.177 "dhgroup": "null" 00:20:52.177 } 00:20:52.177 } 00:20:52.177 ]' 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.177 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.438 06:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.375 06:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:53.633 06:49:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.633 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.200 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.200 { 00:20:54.200 "cntlid": 57, 00:20:54.200 "qid": 0, 00:20:54.200 "state": "enabled", 00:20:54.200 "listen_address": { 00:20:54.200 "trtype": "TCP", 00:20:54.200 "adrfam": "IPv4", 00:20:54.200 "traddr": "10.0.0.2", 00:20:54.200 "trsvcid": "4420" 00:20:54.200 }, 00:20:54.200 "peer_address": { 00:20:54.200 "trtype": "TCP", 00:20:54.200 "adrfam": "IPv4", 00:20:54.200 "traddr": "10.0.0.1", 00:20:54.200 "trsvcid": "50768" 00:20:54.200 }, 00:20:54.200 "auth": { 00:20:54.200 "state": "completed", 00:20:54.200 "digest": "sha384", 00:20:54.200 "dhgroup": "ffdhe2048" 00:20:54.200 } 00:20:54.200 } 00:20:54.200 ]' 00:20:54.200 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.458 06:49:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.458 06:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.716 06:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.653 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.910 06:49:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.910 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.167 00:20:56.167 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.167 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.167 06:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.425 { 00:20:56.425 "cntlid": 59, 00:20:56.425 "qid": 0, 00:20:56.425 "state": "enabled", 00:20:56.425 "listen_address": { 00:20:56.425 "trtype": "TCP", 00:20:56.425 "adrfam": "IPv4", 00:20:56.425 "traddr": "10.0.0.2", 00:20:56.425 "trsvcid": "4420" 00:20:56.425 }, 00:20:56.425 "peer_address": { 00:20:56.425 "trtype": "TCP", 00:20:56.425 "adrfam": "IPv4", 00:20:56.425 "traddr": "10.0.0.1", 00:20:56.425 "trsvcid": "50802" 00:20:56.425 }, 00:20:56.425 "auth": { 00:20:56.425 "state": "completed", 00:20:56.425 "digest": "sha384", 00:20:56.425 "dhgroup": "ffdhe2048" 00:20:56.425 } 00:20:56.425 } 00:20:56.425 ]' 00:20:56.425 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.683 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.941 06:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.876 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.134 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.392 00:20:58.392 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.392 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.392 06:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.651 { 00:20:58.651 "cntlid": 61, 00:20:58.651 "qid": 0, 00:20:58.651 "state": "enabled", 00:20:58.651 "listen_address": { 00:20:58.651 "trtype": "TCP", 00:20:58.651 "adrfam": "IPv4", 00:20:58.651 "traddr": "10.0.0.2", 00:20:58.651 "trsvcid": "4420" 00:20:58.651 }, 00:20:58.651 "peer_address": { 00:20:58.651 "trtype": "TCP", 00:20:58.651 "adrfam": "IPv4", 00:20:58.651 "traddr": "10.0.0.1", 00:20:58.651 "trsvcid": "50818" 00:20:58.651 }, 00:20:58.651 "auth": { 00:20:58.651 "state": "completed", 00:20:58.651 "digest": "sha384", 00:20:58.651 "dhgroup": "ffdhe2048" 00:20:58.651 } 00:20:58.651 } 00:20:58.651 ]' 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.651 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.911 06:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.289 06:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.547 00:21:00.547 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.547 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.547 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.805 { 00:21:00.805 "cntlid": 63, 00:21:00.805 "qid": 0, 00:21:00.805 "state": "enabled", 00:21:00.805 "listen_address": { 00:21:00.805 "trtype": "TCP", 00:21:00.805 "adrfam": "IPv4", 00:21:00.805 "traddr": "10.0.0.2", 00:21:00.805 "trsvcid": "4420" 00:21:00.805 }, 00:21:00.805 "peer_address": { 00:21:00.805 "trtype": "TCP", 00:21:00.805 "adrfam": "IPv4", 00:21:00.805 "traddr": "10.0.0.1", 00:21:00.805 "trsvcid": "50834" 00:21:00.805 }, 00:21:00.805 "auth": { 00:21:00.805 "state": "completed", 00:21:00.805 "digest": 
"sha384", 00:21:00.805 "dhgroup": "ffdhe2048" 00:21:00.805 } 00:21:00.805 } 00:21:00.805 ]' 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.805 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.063 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.063 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.063 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.063 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.063 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.321 06:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.291 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.550 06:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.808 00:21:02.808 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.808 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.808 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.066 { 00:21:03.066 "cntlid": 65, 00:21:03.066 "qid": 0, 00:21:03.066 "state": "enabled", 00:21:03.066 "listen_address": { 00:21:03.066 "trtype": "TCP", 00:21:03.066 "adrfam": "IPv4", 00:21:03.066 "traddr": "10.0.0.2", 00:21:03.066 "trsvcid": "4420" 00:21:03.066 }, 00:21:03.066 "peer_address": { 00:21:03.066 "trtype": "TCP", 00:21:03.066 "adrfam": "IPv4", 00:21:03.066 "traddr": "10.0.0.1", 00:21:03.066 "trsvcid": "50858" 00:21:03.066 }, 00:21:03.066 "auth": { 00:21:03.066 "state": "completed", 00:21:03.066 "digest": "sha384", 00:21:03.066 "dhgroup": "ffdhe3072" 00:21:03.066 } 00:21:03.066 } 00:21:03.066 ]' 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.066 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.324 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.324 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.324 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.324 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.324 06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.582 
06:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.518 06:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.776 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.034 00:21:05.034 06:49:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.034 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.034 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.292 { 00:21:05.292 "cntlid": 67, 00:21:05.292 "qid": 0, 00:21:05.292 "state": "enabled", 00:21:05.292 "listen_address": { 00:21:05.292 "trtype": "TCP", 00:21:05.292 "adrfam": "IPv4", 00:21:05.292 "traddr": "10.0.0.2", 00:21:05.292 "trsvcid": "4420" 00:21:05.292 }, 00:21:05.292 "peer_address": { 00:21:05.292 "trtype": "TCP", 00:21:05.292 "adrfam": "IPv4", 00:21:05.292 "traddr": "10.0.0.1", 00:21:05.292 "trsvcid": "33102" 00:21:05.292 }, 00:21:05.292 "auth": { 00:21:05.292 "state": "completed", 00:21:05.292 "digest": "sha384", 00:21:05.292 "dhgroup": "ffdhe3072" 00:21:05.292 } 00:21:05.292 } 00:21:05.292 ]' 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.292 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.550 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.550 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.550 06:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.808 06:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.743 
06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.743 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.001 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.002 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.258 00:21:07.258 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.258 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.258 06:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.515 { 00:21:07.515 "cntlid": 69, 00:21:07.515 "qid": 0, 00:21:07.515 "state": "enabled", 00:21:07.515 "listen_address": { 
00:21:07.515 "trtype": "TCP", 00:21:07.515 "adrfam": "IPv4", 00:21:07.515 "traddr": "10.0.0.2", 00:21:07.515 "trsvcid": "4420" 00:21:07.515 }, 00:21:07.515 "peer_address": { 00:21:07.515 "trtype": "TCP", 00:21:07.515 "adrfam": "IPv4", 00:21:07.515 "traddr": "10.0.0.1", 00:21:07.515 "trsvcid": "33124" 00:21:07.515 }, 00:21:07.515 "auth": { 00:21:07.515 "state": "completed", 00:21:07.515 "digest": "sha384", 00:21:07.515 "dhgroup": "ffdhe3072" 00:21:07.515 } 00:21:07.515 } 00:21:07.515 ]' 00:21:07.515 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.772 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.029 06:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.964 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.222 
06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.222 06:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.480 00:21:09.480 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.480 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.480 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.738 { 00:21:09.738 "cntlid": 71, 00:21:09.738 "qid": 0, 00:21:09.738 "state": "enabled", 00:21:09.738 "listen_address": { 00:21:09.738 "trtype": "TCP", 00:21:09.738 "adrfam": "IPv4", 00:21:09.738 "traddr": "10.0.0.2", 00:21:09.738 "trsvcid": "4420" 00:21:09.738 }, 00:21:09.738 "peer_address": { 00:21:09.738 "trtype": "TCP", 00:21:09.738 "adrfam": "IPv4", 00:21:09.738 "traddr": "10.0.0.1", 00:21:09.738 "trsvcid": "33152" 00:21:09.738 }, 00:21:09.738 "auth": { 00:21:09.738 "state": "completed", 00:21:09.738 "digest": "sha384", 00:21:09.738 "dhgroup": "ffdhe3072" 00:21:09.738 } 00:21:09.738 } 00:21:09.738 ]' 00:21:09.738 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.996 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.253 06:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.188 06:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.446 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.014 00:21:12.014 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.014 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.014 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.272 { 00:21:12.272 "cntlid": 73, 00:21:12.272 "qid": 0, 00:21:12.272 "state": "enabled", 00:21:12.272 "listen_address": { 00:21:12.272 "trtype": "TCP", 00:21:12.272 "adrfam": "IPv4", 00:21:12.272 "traddr": "10.0.0.2", 00:21:12.272 "trsvcid": "4420" 00:21:12.272 }, 00:21:12.272 "peer_address": { 00:21:12.272 "trtype": "TCP", 00:21:12.272 "adrfam": "IPv4", 00:21:12.272 "traddr": "10.0.0.1", 00:21:12.272 "trsvcid": "33186" 00:21:12.272 }, 00:21:12.272 "auth": { 00:21:12.272 "state": "completed", 00:21:12.272 "digest": "sha384", 00:21:12.272 "dhgroup": "ffdhe4096" 00:21:12.272 } 00:21:12.272 } 00:21:12.272 ]' 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.272 06:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.532 06:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:13.470 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.729 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.987 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.988 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.246 00:21:14.246 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.246 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.246 06:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.503 { 00:21:14.503 "cntlid": 75, 00:21:14.503 "qid": 0, 00:21:14.503 "state": "enabled", 00:21:14.503 "listen_address": { 00:21:14.503 "trtype": "TCP", 00:21:14.503 "adrfam": "IPv4", 00:21:14.503 "traddr": "10.0.0.2", 00:21:14.503 "trsvcid": "4420" 00:21:14.503 }, 00:21:14.503 "peer_address": { 00:21:14.503 "trtype": "TCP", 00:21:14.503 "adrfam": "IPv4", 00:21:14.503 "traddr": "10.0.0.1", 00:21:14.503 "trsvcid": "39894" 00:21:14.503 }, 00:21:14.503 "auth": { 00:21:14.503 "state": "completed", 00:21:14.503 "digest": "sha384", 00:21:14.503 "dhgroup": "ffdhe4096" 00:21:14.503 } 00:21:14.503 } 00:21:14.503 ]' 00:21:14.503 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.761 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.020 06:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.990 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.247 06:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.813 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.813 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.813 { 00:21:16.813 "cntlid": 77, 00:21:16.813 "qid": 0, 00:21:16.813 "state": "enabled", 00:21:16.813 "listen_address": { 00:21:16.813 "trtype": "TCP", 00:21:16.814 "adrfam": "IPv4", 00:21:16.814 "traddr": "10.0.0.2", 00:21:16.814 "trsvcid": "4420" 00:21:16.814 }, 00:21:16.814 "peer_address": { 00:21:16.814 "trtype": "TCP", 00:21:16.814 "adrfam": "IPv4", 00:21:16.814 "traddr": "10.0.0.1", 00:21:16.814 "trsvcid": "39930" 00:21:16.814 }, 00:21:16.814 "auth": { 00:21:16.814 "state": "completed", 00:21:16.814 "digest": "sha384", 00:21:16.814 "dhgroup": "ffdhe4096" 00:21:16.814 } 00:21:16.814 } 00:21:16.814 ]' 00:21:16.814 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.071 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.329 06:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:18.291 06:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.549 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.807 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.066 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.324 { 00:21:19.324 "cntlid": 79, 00:21:19.324 "qid": 0, 00:21:19.324 "state": "enabled", 00:21:19.324 "listen_address": { 00:21:19.324 "trtype": "TCP", 00:21:19.324 "adrfam": "IPv4", 00:21:19.324 "traddr": "10.0.0.2", 00:21:19.324 "trsvcid": "4420" 00:21:19.324 }, 00:21:19.324 "peer_address": { 00:21:19.324 "trtype": "TCP", 00:21:19.324 "adrfam": "IPv4", 00:21:19.324 "traddr": "10.0.0.1", 00:21:19.324 "trsvcid": "39966" 00:21:19.324 }, 00:21:19.324 "auth": { 00:21:19.324 "state": "completed", 00:21:19.324 "digest": "sha384", 00:21:19.324 "dhgroup": "ffdhe4096" 00:21:19.324 } 00:21:19.324 } 00:21:19.324 ]' 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.324 06:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.582 06:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.520 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.520 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.777 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.344 00:21:21.344 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.344 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.344 06:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.602 { 00:21:21.602 "cntlid": 81, 00:21:21.602 "qid": 0, 00:21:21.602 "state": "enabled", 00:21:21.602 "listen_address": { 00:21:21.602 "trtype": "TCP", 00:21:21.602 "adrfam": "IPv4", 00:21:21.602 "traddr": "10.0.0.2", 00:21:21.602 "trsvcid": "4420" 00:21:21.602 }, 00:21:21.602 "peer_address": { 00:21:21.602 "trtype": "TCP", 00:21:21.602 "adrfam": "IPv4", 00:21:21.602 "traddr": "10.0.0.1", 00:21:21.602 "trsvcid": "40006" 00:21:21.602 }, 00:21:21.602 "auth": { 00:21:21.602 "state": "completed", 00:21:21.602 "digest": "sha384", 00:21:21.602 "dhgroup": "ffdhe6144" 00:21:21.602 } 00:21:21.602 } 00:21:21.602 ]' 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.602 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.169 06:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.105 06:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.670 00:21:23.671 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.671 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.671 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.928 { 00:21:23.928 "cntlid": 83, 00:21:23.928 "qid": 0, 00:21:23.928 "state": "enabled", 00:21:23.928 "listen_address": { 00:21:23.928 "trtype": "TCP", 00:21:23.928 "adrfam": "IPv4", 00:21:23.928 "traddr": "10.0.0.2", 00:21:23.928 "trsvcid": "4420" 00:21:23.928 }, 00:21:23.928 "peer_address": { 00:21:23.928 "trtype": "TCP", 00:21:23.928 "adrfam": "IPv4", 00:21:23.928 "traddr": "10.0.0.1", 00:21:23.928 "trsvcid": "40036" 00:21:23.928 }, 00:21:23.928 "auth": { 00:21:23.928 "state": "completed", 00:21:23.928 "digest": "sha384", 00:21:23.928 
"dhgroup": "ffdhe6144" 00:21:23.928 } 00:21:23.928 } 00:21:23.928 ]' 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.928 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.184 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.184 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.184 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.184 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.184 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.441 06:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.373 06:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.631 06:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.632 06:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.632 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.632 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.197 00:21:26.197 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.197 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.197 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.455 { 00:21:26.455 "cntlid": 85, 00:21:26.455 "qid": 0, 00:21:26.455 "state": "enabled", 00:21:26.455 "listen_address": { 00:21:26.455 "trtype": "TCP", 00:21:26.455 "adrfam": "IPv4", 00:21:26.455 "traddr": "10.0.0.2", 00:21:26.455 "trsvcid": "4420" 00:21:26.455 }, 00:21:26.455 "peer_address": { 00:21:26.455 "trtype": "TCP", 00:21:26.455 "adrfam": "IPv4", 00:21:26.455 "traddr": "10.0.0.1", 00:21:26.455 "trsvcid": "50872" 00:21:26.455 }, 00:21:26.455 "auth": { 00:21:26.455 "state": "completed", 00:21:26.455 "digest": "sha384", 00:21:26.455 "dhgroup": "ffdhe6144" 00:21:26.455 } 00:21:26.455 } 00:21:26.455 ]' 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.455 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.456 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.456 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.456 06:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.713 06:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.646 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.904 06:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.469 00:21:28.469 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.469 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.469 06:50:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.725 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.725 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.726 06:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.726 06:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.726 06:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.726 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.726 { 00:21:28.726 "cntlid": 87, 00:21:28.726 "qid": 0, 00:21:28.726 "state": "enabled", 00:21:28.726 "listen_address": { 00:21:28.726 "trtype": "TCP", 00:21:28.726 "adrfam": "IPv4", 00:21:28.726 "traddr": "10.0.0.2", 00:21:28.726 "trsvcid": "4420" 00:21:28.726 }, 00:21:28.726 "peer_address": { 00:21:28.726 "trtype": "TCP", 00:21:28.726 "adrfam": "IPv4", 00:21:28.726 "traddr": "10.0.0.1", 00:21:28.726 "trsvcid": "50902" 00:21:28.726 }, 00:21:28.726 "auth": { 00:21:28.726 "state": "completed", 00:21:28.726 "digest": "sha384", 00:21:28.726 "dhgroup": "ffdhe6144" 00:21:28.726 } 00:21:28.726 } 00:21:28.726 ]' 00:21:28.726 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.010 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.283 06:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.213 06:50:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.213 06:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.471 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.402 00:21:31.402 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.402 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.402 06:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.659 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.660 { 00:21:31.660 "cntlid": 89, 00:21:31.660 "qid": 0, 00:21:31.660 "state": "enabled", 00:21:31.660 "listen_address": { 00:21:31.660 "trtype": "TCP", 00:21:31.660 "adrfam": "IPv4", 00:21:31.660 "traddr": "10.0.0.2", 00:21:31.660 
"trsvcid": "4420" 00:21:31.660 }, 00:21:31.660 "peer_address": { 00:21:31.660 "trtype": "TCP", 00:21:31.660 "adrfam": "IPv4", 00:21:31.660 "traddr": "10.0.0.1", 00:21:31.660 "trsvcid": "50930" 00:21:31.660 }, 00:21:31.660 "auth": { 00:21:31.660 "state": "completed", 00:21:31.660 "digest": "sha384", 00:21:31.660 "dhgroup": "ffdhe8192" 00:21:31.660 } 00:21:31.660 } 00:21:31.660 ]' 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.660 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.917 06:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.288 06:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.221 00:21:34.221 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.221 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.221 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.478 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.478 { 00:21:34.478 "cntlid": 91, 00:21:34.478 "qid": 0, 00:21:34.478 "state": "enabled", 00:21:34.478 "listen_address": { 00:21:34.478 "trtype": "TCP", 00:21:34.478 "adrfam": "IPv4", 00:21:34.479 "traddr": "10.0.0.2", 00:21:34.479 "trsvcid": "4420" 00:21:34.479 }, 00:21:34.479 "peer_address": { 00:21:34.479 "trtype": "TCP", 00:21:34.479 "adrfam": "IPv4", 00:21:34.479 "traddr": "10.0.0.1", 00:21:34.479 "trsvcid": "60366" 00:21:34.479 }, 00:21:34.479 "auth": { 00:21:34.479 "state": "completed", 00:21:34.479 "digest": "sha384", 00:21:34.479 "dhgroup": "ffdhe8192" 00:21:34.479 } 00:21:34.479 } 00:21:34.479 ]' 00:21:34.479 06:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.479 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.479 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.479 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.479 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.736 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.737 06:50:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.737 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.994 06:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.928 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.185 06:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.118 00:21:37.118 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.118 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.118 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.376 { 00:21:37.376 "cntlid": 93, 00:21:37.376 "qid": 0, 00:21:37.376 "state": "enabled", 00:21:37.376 "listen_address": { 00:21:37.376 "trtype": "TCP", 00:21:37.376 "adrfam": "IPv4", 00:21:37.376 "traddr": "10.0.0.2", 00:21:37.376 "trsvcid": "4420" 00:21:37.376 }, 00:21:37.376 "peer_address": { 00:21:37.376 "trtype": "TCP", 00:21:37.376 "adrfam": "IPv4", 00:21:37.376 "traddr": "10.0.0.1", 00:21:37.376 "trsvcid": "60390" 00:21:37.376 }, 00:21:37.376 "auth": { 00:21:37.376 "state": "completed", 00:21:37.376 "digest": "sha384", 00:21:37.376 "dhgroup": "ffdhe8192" 00:21:37.376 } 00:21:37.376 } 00:21:37.376 ]' 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.376 06:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.634 06:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:38.567 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:38.824 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:38.824 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.824 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:38.824 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.825 06:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.758 00:21:39.758 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.758 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.758 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.016 06:50:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.016 { 00:21:40.016 "cntlid": 95, 00:21:40.016 "qid": 0, 00:21:40.016 "state": "enabled", 00:21:40.016 "listen_address": { 00:21:40.016 "trtype": "TCP", 00:21:40.016 "adrfam": "IPv4", 00:21:40.016 "traddr": "10.0.0.2", 00:21:40.016 "trsvcid": "4420" 00:21:40.016 }, 00:21:40.016 "peer_address": { 00:21:40.016 "trtype": "TCP", 00:21:40.016 "adrfam": "IPv4", 00:21:40.016 "traddr": "10.0.0.1", 00:21:40.016 "trsvcid": "60410" 00:21:40.016 }, 00:21:40.016 "auth": { 00:21:40.016 "state": "completed", 00:21:40.016 "digest": "sha384", 00:21:40.016 "dhgroup": "ffdhe8192" 00:21:40.016 } 00:21:40.016 } 00:21:40.016 ]' 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.016 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.274 06:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:41.207 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.207 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.207 06:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.207 06:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.464 06:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.722 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.980 00:21:41.980 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.980 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.980 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.239 { 00:21:42.239 "cntlid": 97, 00:21:42.239 "qid": 0, 00:21:42.239 "state": "enabled", 00:21:42.239 "listen_address": { 00:21:42.239 "trtype": "TCP", 00:21:42.239 "adrfam": "IPv4", 00:21:42.239 "traddr": "10.0.0.2", 00:21:42.239 "trsvcid": "4420" 00:21:42.239 }, 00:21:42.239 "peer_address": { 00:21:42.239 "trtype": "TCP", 00:21:42.239 "adrfam": "IPv4", 00:21:42.239 "traddr": "10.0.0.1", 00:21:42.239 "trsvcid": "60424" 00:21:42.239 }, 00:21:42.239 "auth": { 00:21:42.239 "state": "completed", 00:21:42.239 "digest": "sha512", 00:21:42.239 "dhgroup": "null" 00:21:42.239 } 00:21:42.239 } 00:21:42.239 ]' 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.239 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.497 06:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.457 06:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.725 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:43.725 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.726 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.292 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.292 { 00:21:44.292 "cntlid": 99, 00:21:44.292 "qid": 0, 00:21:44.292 "state": "enabled", 00:21:44.292 "listen_address": { 00:21:44.292 "trtype": "TCP", 00:21:44.292 "adrfam": "IPv4", 00:21:44.292 "traddr": "10.0.0.2", 00:21:44.292 "trsvcid": "4420" 00:21:44.292 }, 00:21:44.292 "peer_address": { 00:21:44.292 "trtype": "TCP", 00:21:44.292 "adrfam": "IPv4", 00:21:44.292 "traddr": "10.0.0.1", 00:21:44.292 "trsvcid": "57906" 00:21:44.292 }, 00:21:44.292 "auth": { 00:21:44.292 "state": "completed", 00:21:44.292 "digest": "sha512", 00:21:44.292 "dhgroup": "null" 00:21:44.292 } 00:21:44.292 } 00:21:44.292 ]' 00:21:44.292 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.549 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.549 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.549 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:44.549 06:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.549 06:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.549 06:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.549 06:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.807 06:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 
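The round traced above is one complete connect_authenticate pass: attach a controller with --dhchap-key/--dhchap-ctrlr-key, dump the subsystem's qpairs, assert via jq that the negotiated digest, dhgroup, and auth state match what was configured, detach, then redo the handshake in-band with nvme connect. A minimal sketch of that verification step, paraphrased from the target/auth.sh@44-48 markers; the rpc.py path and /var/tmp/host.sock come from the trace, while the use of the default socket for the target-side call is an assumption (the trace's rpc_cmd wrapper does not print which socket it uses):

    #!/usr/bin/env bash
    # Sketch of the per-round verification; not the suite's script itself.
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    digest=sha512 dhgroup=null   # per-round values; the trace cycles through these
    # Host side (-s /var/tmp/host.sock): the attached controller must be nvme0.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Target side (default socket assumed here): inspect the subsystem's qpairs.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]
    # On success the suite detaches and reconnects in-band with nvme connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0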
00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.741 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.998 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.254 00:21:46.511 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.511 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.511 06:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.769 { 00:21:46.769 "cntlid": 101, 00:21:46.769 "qid": 0, 00:21:46.769 "state": "enabled", 00:21:46.769 "listen_address": { 00:21:46.769 "trtype": "TCP", 00:21:46.769 "adrfam": "IPv4", 00:21:46.769 "traddr": "10.0.0.2", 00:21:46.769 "trsvcid": "4420" 00:21:46.769 }, 00:21:46.769 "peer_address": { 00:21:46.769 "trtype": "TCP", 00:21:46.769 "adrfam": "IPv4", 00:21:46.769 "traddr": "10.0.0.1", 00:21:46.769 "trsvcid": "57936" 00:21:46.769 }, 00:21:46.769 "auth": { 00:21:46.769 "state": "completed", 00:21:46.769 "digest": "sha512", 00:21:46.769 "dhgroup": "null" 00:21:46.769 } 00:21:46.769 } 00:21:46.769 ]' 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.769 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.026 06:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.959 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.217 06:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.781 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.781 { 00:21:48.781 "cntlid": 103, 00:21:48.781 "qid": 0, 00:21:48.781 "state": "enabled", 00:21:48.781 "listen_address": { 00:21:48.781 "trtype": "TCP", 00:21:48.781 "adrfam": "IPv4", 00:21:48.781 "traddr": "10.0.0.2", 00:21:48.781 "trsvcid": "4420" 00:21:48.781 }, 00:21:48.781 "peer_address": { 00:21:48.781 "trtype": "TCP", 00:21:48.781 "adrfam": "IPv4", 00:21:48.781 "traddr": "10.0.0.1", 00:21:48.781 "trsvcid": "57970" 00:21:48.781 }, 00:21:48.781 "auth": { 00:21:48.781 "state": "completed", 00:21:48.781 "digest": "sha512", 00:21:48.781 "dhgroup": "null" 00:21:48.781 } 00:21:48.781 } 00:21:48.781 ]' 00:21:48.781 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.039 06:50:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.039 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.296 06:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.227 06:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.483 06:50:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.483 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.048 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.048 { 00:21:51.048 "cntlid": 105, 00:21:51.048 "qid": 0, 00:21:51.048 "state": "enabled", 00:21:51.048 "listen_address": { 00:21:51.048 "trtype": "TCP", 00:21:51.048 "adrfam": "IPv4", 00:21:51.048 "traddr": "10.0.0.2", 00:21:51.048 "trsvcid": "4420" 00:21:51.048 }, 00:21:51.048 "peer_address": { 00:21:51.048 "trtype": "TCP", 00:21:51.048 "adrfam": "IPv4", 00:21:51.048 "traddr": "10.0.0.1", 00:21:51.048 "trsvcid": "58008" 00:21:51.048 }, 00:21:51.048 "auth": { 00:21:51.048 "state": "completed", 00:21:51.048 "digest": "sha512", 00:21:51.048 "dhgroup": "ffdhe2048" 00:21:51.048 } 00:21:51.048 } 00:21:51.048 ]' 00:21:51.048 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.306 06:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.564 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.498 06:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.756 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.014 00:21:53.014 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.014 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.014 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.271 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.271 { 00:21:53.271 "cntlid": 107, 00:21:53.271 "qid": 0, 00:21:53.271 "state": "enabled", 00:21:53.271 "listen_address": { 00:21:53.271 "trtype": "TCP", 00:21:53.271 "adrfam": "IPv4", 00:21:53.271 "traddr": "10.0.0.2", 00:21:53.271 "trsvcid": "4420" 00:21:53.272 }, 00:21:53.272 "peer_address": { 00:21:53.272 "trtype": "TCP", 00:21:53.272 "adrfam": "IPv4", 00:21:53.272 "traddr": "10.0.0.1", 00:21:53.272 "trsvcid": "58034" 00:21:53.272 }, 00:21:53.272 "auth": { 00:21:53.272 "state": "completed", 00:21:53.272 "digest": "sha512", 00:21:53.272 "dhgroup": "ffdhe2048" 00:21:53.272 } 00:21:53.272 } 00:21:53.272 ]' 00:21:53.272 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.529 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.529 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.529 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.530 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.530 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.530 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.530 06:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.787 06:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.720 06:50:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.720 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.978 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.543 00:21:55.543 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.543 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.543 06:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.800 { 00:21:55.800 "cntlid": 109, 00:21:55.800 "qid": 0, 00:21:55.800 "state": "enabled", 00:21:55.800 "listen_address": { 00:21:55.800 "trtype": "TCP", 00:21:55.800 "adrfam": "IPv4", 00:21:55.800 "traddr": "10.0.0.2", 00:21:55.800 "trsvcid": "4420" 00:21:55.800 }, 00:21:55.800 "peer_address": { 00:21:55.800 "trtype": "TCP", 00:21:55.800 
"adrfam": "IPv4", 00:21:55.800 "traddr": "10.0.0.1", 00:21:55.800 "trsvcid": "42140" 00:21:55.800 }, 00:21:55.800 "auth": { 00:21:55.800 "state": "completed", 00:21:55.800 "digest": "sha512", 00:21:55.800 "dhgroup": "ffdhe2048" 00:21:55.800 } 00:21:55.800 } 00:21:55.800 ]' 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.800 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.056 06:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.988 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.280 06:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.538 00:21:57.538 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.538 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.538 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.796 { 00:21:57.796 "cntlid": 111, 00:21:57.796 "qid": 0, 00:21:57.796 "state": "enabled", 00:21:57.796 "listen_address": { 00:21:57.796 "trtype": "TCP", 00:21:57.796 "adrfam": "IPv4", 00:21:57.796 "traddr": "10.0.0.2", 00:21:57.796 "trsvcid": "4420" 00:21:57.796 }, 00:21:57.796 "peer_address": { 00:21:57.796 "trtype": "TCP", 00:21:57.796 "adrfam": "IPv4", 00:21:57.796 "traddr": "10.0.0.1", 00:21:57.796 "trsvcid": "42164" 00:21:57.796 }, 00:21:57.796 "auth": { 00:21:57.796 "state": "completed", 00:21:57.796 "digest": "sha512", 00:21:57.796 "dhgroup": "ffdhe2048" 00:21:57.796 } 00:21:57.796 } 00:21:57.796 ]' 00:21:57.796 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.054 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.311 06:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.244 06:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.502 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
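For reference, each connect_authenticate round in the trace above reduces to three RPC calls: pin the SPDK host app (the one listening on /var/tmp/host.sock) to a single digest/dhgroup pair, authorize the host NQN on the target with a DH-HMAC-CHAP key, and attach the controller, which performs the in-band handshake. A minimal sketch of the ffdhe3072/key0 round that just started, with scripts/rpc.py abbreviating the full workspace path and $HOSTNQN standing in for the uuid host NQN used throughout this run:

    # host side: allow only the digest/dhgroup combination under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # target side: authorize the host NQN with key0 (ckey0 makes the auth bidirectional)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach; DH-HMAC-CHAP runs as part of the fabric connect
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0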
00:21:59.759 00:22:00.017 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.017 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.017 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.275 { 00:22:00.275 "cntlid": 113, 00:22:00.275 "qid": 0, 00:22:00.275 "state": "enabled", 00:22:00.275 "listen_address": { 00:22:00.275 "trtype": "TCP", 00:22:00.275 "adrfam": "IPv4", 00:22:00.275 "traddr": "10.0.0.2", 00:22:00.275 "trsvcid": "4420" 00:22:00.275 }, 00:22:00.275 "peer_address": { 00:22:00.275 "trtype": "TCP", 00:22:00.275 "adrfam": "IPv4", 00:22:00.275 "traddr": "10.0.0.1", 00:22:00.275 "trsvcid": "42186" 00:22:00.275 }, 00:22:00.275 "auth": { 00:22:00.275 "state": "completed", 00:22:00.275 "digest": "sha512", 00:22:00.275 "dhgroup": "ffdhe3072" 00:22:00.275 } 00:22:00.275 } 00:22:00.275 ]' 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.275 06:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.532 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
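The pass/fail check sandwiched into every round is also visible above: after the attach, the script confirms that one controller came up and that the qpair the target reports carries the negotiated auth parameters. A sketch of those checks, assuming rpc_cmd maps to the target's default RPC socket, with the expected values for an ffdhe3072 round:

    # the host-side controller must exist
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # expected: nvme0
    # the target reports the qpair's negotiated auth state
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expected: sha512, ffdhe3072, completed
    # detach so the next key/dhgroup combination starts from a clean slate
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0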
00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.464 06:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.722 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.286 00:22:02.286 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.286 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.286 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.543 { 00:22:02.543 
"cntlid": 115, 00:22:02.543 "qid": 0, 00:22:02.543 "state": "enabled", 00:22:02.543 "listen_address": { 00:22:02.543 "trtype": "TCP", 00:22:02.543 "adrfam": "IPv4", 00:22:02.543 "traddr": "10.0.0.2", 00:22:02.543 "trsvcid": "4420" 00:22:02.543 }, 00:22:02.543 "peer_address": { 00:22:02.543 "trtype": "TCP", 00:22:02.543 "adrfam": "IPv4", 00:22:02.543 "traddr": "10.0.0.1", 00:22:02.543 "trsvcid": "42212" 00:22:02.543 }, 00:22:02.543 "auth": { 00:22:02.543 "state": "completed", 00:22:02.543 "digest": "sha512", 00:22:02.543 "dhgroup": "ffdhe3072" 00:22:02.543 } 00:22:02.543 } 00:22:02.543 ]' 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.543 06:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.543 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.543 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.543 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.543 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.543 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.800 06:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.733 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.990 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:03.990 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.991 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.248 06:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.248 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.248 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.506 00:22:04.506 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.506 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.506 06:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.765 { 00:22:04.765 "cntlid": 117, 00:22:04.765 "qid": 0, 00:22:04.765 "state": "enabled", 00:22:04.765 "listen_address": { 00:22:04.765 "trtype": "TCP", 00:22:04.765 "adrfam": "IPv4", 00:22:04.765 "traddr": "10.0.0.2", 00:22:04.765 "trsvcid": "4420" 00:22:04.765 }, 00:22:04.765 "peer_address": { 00:22:04.765 "trtype": "TCP", 00:22:04.765 "adrfam": "IPv4", 00:22:04.765 "traddr": "10.0.0.1", 00:22:04.765 "trsvcid": "51496" 00:22:04.765 }, 00:22:04.765 "auth": { 00:22:04.765 "state": "completed", 00:22:04.765 "digest": "sha512", 00:22:04.765 "dhgroup": "ffdhe3072" 00:22:04.765 } 00:22:04.765 } 00:22:04.765 ]' 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.765 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.023 06:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.394 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.395 06:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.959 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.959 { 00:22:06.959 "cntlid": 119, 00:22:06.959 "qid": 0, 00:22:06.959 "state": "enabled", 00:22:06.959 "listen_address": { 00:22:06.959 "trtype": "TCP", 00:22:06.959 "adrfam": "IPv4", 00:22:06.959 "traddr": "10.0.0.2", 00:22:06.959 "trsvcid": "4420" 00:22:06.959 }, 00:22:06.959 "peer_address": { 00:22:06.959 "trtype": "TCP", 00:22:06.959 "adrfam": "IPv4", 00:22:06.959 "traddr": "10.0.0.1", 00:22:06.959 "trsvcid": "51524" 00:22:06.959 }, 00:22:06.959 "auth": { 00:22:06.959 "state": "completed", 00:22:06.959 "digest": "sha512", 00:22:06.959 "dhgroup": "ffdhe3072" 00:22:06.959 } 00:22:06.959 } 00:22:06.959 ]' 00:22:06.959 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.217 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.474 06:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.405 06:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.661 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.919 00:22:08.919 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.919 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.919 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.176 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.176 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.176 06:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.176 06:50:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.176 06:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.176 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.176 { 00:22:09.176 "cntlid": 121, 00:22:09.176 "qid": 0, 00:22:09.176 "state": "enabled", 00:22:09.176 "listen_address": { 00:22:09.176 "trtype": "TCP", 00:22:09.176 "adrfam": "IPv4", 00:22:09.176 "traddr": "10.0.0.2", 00:22:09.176 "trsvcid": "4420" 00:22:09.176 }, 00:22:09.176 "peer_address": { 00:22:09.176 "trtype": "TCP", 00:22:09.176 "adrfam": "IPv4", 00:22:09.176 "traddr": "10.0.0.1", 00:22:09.176 "trsvcid": "51554" 00:22:09.176 }, 00:22:09.176 "auth": { 00:22:09.176 "state": "completed", 00:22:09.176 "digest": "sha512", 00:22:09.176 "dhgroup": "ffdhe4096" 00:22:09.176 } 00:22:09.176 } 00:22:09.176 ]' 00:22:09.177 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.434 06:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.692 06:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.625 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.883 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.170 00:22:11.170 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.170 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.170 06:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.428 { 00:22:11.428 "cntlid": 123, 00:22:11.428 "qid": 0, 00:22:11.428 "state": "enabled", 00:22:11.428 "listen_address": { 00:22:11.428 "trtype": "TCP", 00:22:11.428 "adrfam": "IPv4", 00:22:11.428 "traddr": "10.0.0.2", 00:22:11.428 "trsvcid": "4420" 00:22:11.428 }, 00:22:11.428 "peer_address": { 00:22:11.428 "trtype": "TCP", 00:22:11.428 "adrfam": "IPv4", 00:22:11.428 "traddr": "10.0.0.1", 00:22:11.428 "trsvcid": "51572" 00:22:11.428 }, 00:22:11.428 "auth": { 00:22:11.428 "state": "completed", 00:22:11.428 "digest": "sha512", 00:22:11.428 "dhgroup": "ffdhe4096" 00:22:11.428 } 00:22:11.428 } 00:22:11.428 ]' 00:22:11.428 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.686 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.944 06:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:12.878 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.136 
06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.136 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.394 00:22:13.394 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.394 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.394 06:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.651 { 00:22:13.651 "cntlid": 125, 00:22:13.651 "qid": 0, 00:22:13.651 "state": "enabled", 00:22:13.651 "listen_address": { 00:22:13.651 "trtype": "TCP", 00:22:13.651 "adrfam": "IPv4", 00:22:13.651 "traddr": "10.0.0.2", 00:22:13.651 "trsvcid": "4420" 00:22:13.651 }, 00:22:13.651 "peer_address": { 00:22:13.651 "trtype": "TCP", 00:22:13.651 "adrfam": "IPv4", 00:22:13.651 "traddr": "10.0.0.1", 00:22:13.651 "trsvcid": "51610" 00:22:13.651 }, 00:22:13.651 "auth": { 00:22:13.651 "state": "completed", 00:22:13.651 "digest": "sha512", 00:22:13.651 "dhgroup": "ffdhe4096" 00:22:13.651 } 00:22:13.651 } 00:22:13.651 ]' 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.651 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.908 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.908 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.908 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.908 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.908 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.165 06:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.098 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.354 06:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.612 00:22:15.612 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.612 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.612 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.867 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.867 { 00:22:15.867 "cntlid": 127, 00:22:15.867 "qid": 0, 00:22:15.867 "state": "enabled", 00:22:15.867 "listen_address": { 00:22:15.867 "trtype": "TCP", 00:22:15.867 "adrfam": "IPv4", 00:22:15.867 "traddr": "10.0.0.2", 00:22:15.867 "trsvcid": "4420" 00:22:15.867 }, 00:22:15.867 "peer_address": { 00:22:15.867 "trtype": "TCP", 00:22:15.867 "adrfam": "IPv4", 00:22:15.867 "traddr": "10.0.0.1", 00:22:15.868 "trsvcid": "40278" 00:22:15.868 }, 00:22:15.868 "auth": { 00:22:15.868 "state": "completed", 00:22:15.868 "digest": "sha512", 00:22:15.868 "dhgroup": "ffdhe4096" 00:22:15.868 } 00:22:15.868 } 00:22:15.868 ]' 00:22:15.868 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.123 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.124 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.380 06:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
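Each round additionally drives the same handshake from the kernel initiator: nvme-cli is given the cleartext DHHC-1 secrets directly instead of keyring names (the two-digit field after "DHHC-1:" should be the transform applied to the secret, 00 for none and 01/02/03 for SHA-256/384/512), and the host entry is unregistered afterwards. A sketch of that leg, with the secrets abbreviated and $HOSTNQN/$HOSTID as placeholders for the uuid values above:

    # kernel host: in-band DH-HMAC-CHAP via nvme-cli
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:00:<base64 key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: unregister the host before the next pairing is set up
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"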
00:22:17.312 06:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.569 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.134 00:22:18.134 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.134 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.134 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.391 { 00:22:18.391 "cntlid": 129, 00:22:18.391 "qid": 0, 00:22:18.391 "state": "enabled", 00:22:18.391 "listen_address": { 00:22:18.391 "trtype": "TCP", 00:22:18.391 "adrfam": "IPv4", 00:22:18.391 "traddr": "10.0.0.2", 00:22:18.391 "trsvcid": "4420" 00:22:18.391 }, 00:22:18.391 "peer_address": { 00:22:18.391 "trtype": "TCP", 00:22:18.391 "adrfam": "IPv4", 00:22:18.391 "traddr": "10.0.0.1", 00:22:18.391 "trsvcid": "40314" 00:22:18.391 }, 00:22:18.391 "auth": { 
00:22:18.391 "state": "completed", 00:22:18.391 "digest": "sha512", 00:22:18.391 "dhgroup": "ffdhe6144" 00:22:18.391 } 00:22:18.391 } 00:22:18.391 ]' 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.391 06:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.648 06:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.648 06:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.648 06:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.905 06:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.838 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.097 06:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.663 00:22:20.663 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.663 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.663 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.921 { 00:22:20.921 "cntlid": 131, 00:22:20.921 "qid": 0, 00:22:20.921 "state": "enabled", 00:22:20.921 "listen_address": { 00:22:20.921 "trtype": "TCP", 00:22:20.921 "adrfam": "IPv4", 00:22:20.921 "traddr": "10.0.0.2", 00:22:20.921 "trsvcid": "4420" 00:22:20.921 }, 00:22:20.921 "peer_address": { 00:22:20.921 "trtype": "TCP", 00:22:20.921 "adrfam": "IPv4", 00:22:20.921 "traddr": "10.0.0.1", 00:22:20.921 "trsvcid": "40344" 00:22:20.921 }, 00:22:20.921 "auth": { 00:22:20.921 "state": "completed", 00:22:20.921 "digest": "sha512", 00:22:20.921 "dhgroup": "ffdhe6144" 00:22:20.921 } 00:22:20.921 } 00:22:20.921 ]' 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.921 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.179 06:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.114 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.372 06:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:22.939 00:22:22.939 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.939 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.939 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.196 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.196 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.196 06:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.197 { 00:22:23.197 "cntlid": 133, 00:22:23.197 "qid": 0, 00:22:23.197 "state": "enabled", 00:22:23.197 "listen_address": { 00:22:23.197 "trtype": "TCP", 00:22:23.197 "adrfam": "IPv4", 00:22:23.197 "traddr": "10.0.0.2", 00:22:23.197 "trsvcid": "4420" 00:22:23.197 }, 00:22:23.197 "peer_address": { 00:22:23.197 "trtype": "TCP", 00:22:23.197 "adrfam": "IPv4", 00:22:23.197 "traddr": "10.0.0.1", 00:22:23.197 "trsvcid": "40362" 00:22:23.197 }, 00:22:23.197 "auth": { 00:22:23.197 "state": "completed", 00:22:23.197 "digest": "sha512", 00:22:23.197 "dhgroup": "ffdhe6144" 00:22:23.197 } 00:22:23.197 } 00:22:23.197 ]' 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.197 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.454 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.454 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.454 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.454 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.454 06:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.712 06:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.647 06:51:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.647 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.940 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.941 06:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.508 00:22:25.508 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.508 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.508 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.769 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.769 { 00:22:25.769 "cntlid": 135, 00:22:25.769 "qid": 0, 00:22:25.769 "state": "enabled", 00:22:25.769 "listen_address": { 
00:22:25.769 "trtype": "TCP", 00:22:25.769 "adrfam": "IPv4", 00:22:25.769 "traddr": "10.0.0.2", 00:22:25.769 "trsvcid": "4420" 00:22:25.769 }, 00:22:25.769 "peer_address": { 00:22:25.769 "trtype": "TCP", 00:22:25.769 "adrfam": "IPv4", 00:22:25.770 "traddr": "10.0.0.1", 00:22:25.770 "trsvcid": "52348" 00:22:25.770 }, 00:22:25.770 "auth": { 00:22:25.770 "state": "completed", 00:22:25.770 "digest": "sha512", 00:22:25.770 "dhgroup": "ffdhe6144" 00:22:25.770 } 00:22:25.770 } 00:22:25.770 ]' 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.770 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.029 06:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.399 06:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.332 00:22:28.332 06:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.332 06:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.332 06:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.590 06:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.590 06:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.590 06:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.590 06:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.590 { 00:22:28.590 "cntlid": 137, 00:22:28.590 "qid": 0, 00:22:28.590 "state": "enabled", 00:22:28.590 "listen_address": { 00:22:28.590 "trtype": "TCP", 00:22:28.590 "adrfam": "IPv4", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "trsvcid": "4420" 00:22:28.590 }, 00:22:28.590 "peer_address": { 00:22:28.590 "trtype": "TCP", 00:22:28.590 "adrfam": "IPv4", 00:22:28.590 "traddr": "10.0.0.1", 00:22:28.590 "trsvcid": "52378" 00:22:28.590 }, 00:22:28.590 "auth": { 00:22:28.590 "state": "completed", 00:22:28.590 "digest": "sha512", 00:22:28.590 "dhgroup": "ffdhe8192" 00:22:28.590 } 00:22:28.590 } 00:22:28.590 ]' 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.590 06:51:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.590 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.848 06:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.781 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.039 06:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.039 06:51:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.973 00:22:30.973 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.973 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.973 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.231 { 00:22:31.231 "cntlid": 139, 00:22:31.231 "qid": 0, 00:22:31.231 "state": "enabled", 00:22:31.231 "listen_address": { 00:22:31.231 "trtype": "TCP", 00:22:31.231 "adrfam": "IPv4", 00:22:31.231 "traddr": "10.0.0.2", 00:22:31.231 "trsvcid": "4420" 00:22:31.231 }, 00:22:31.231 "peer_address": { 00:22:31.231 "trtype": "TCP", 00:22:31.231 "adrfam": "IPv4", 00:22:31.231 "traddr": "10.0.0.1", 00:22:31.231 "trsvcid": "52410" 00:22:31.231 }, 00:22:31.231 "auth": { 00:22:31.231 "state": "completed", 00:22:31.231 "digest": "sha512", 00:22:31.231 "dhgroup": "ffdhe8192" 00:22:31.231 } 00:22:31.231 } 00:22:31.231 ]' 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.231 06:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.490 06:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDA5MDZlOGRmZGNiNTcyYmMzMDNiOWNhNThjYzAwMTfLPLug: --dhchap-ctrl-secret DHHC-1:02:OWY1MDFhMWM1NWRkODdmOTg0ZGNmN2Y5ZTY1YjIyNWFlOTkyODMyOWUwOWI2MjI27Hf5aQ==: 00:22:32.424 06:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.424 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.682 06:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.616 00:22:33.616 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.616 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.616 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.874 { 00:22:33.874 "cntlid": 141, 00:22:33.874 "qid": 0, 00:22:33.874 "state": "enabled", 00:22:33.874 "listen_address": { 00:22:33.874 "trtype": "TCP", 00:22:33.874 "adrfam": "IPv4", 00:22:33.874 "traddr": "10.0.0.2", 00:22:33.874 "trsvcid": "4420" 00:22:33.874 }, 00:22:33.874 "peer_address": { 00:22:33.874 "trtype": "TCP", 00:22:33.874 "adrfam": "IPv4", 00:22:33.874 "traddr": "10.0.0.1", 00:22:33.874 "trsvcid": "52444" 00:22:33.874 }, 00:22:33.874 "auth": { 00:22:33.874 "state": "completed", 00:22:33.874 "digest": "sha512", 00:22:33.874 "dhgroup": "ffdhe8192" 00:22:33.874 } 00:22:33.874 } 00:22:33.874 ]' 00:22:33.874 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.132 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.389 06:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjU5NjNhZjk3MDBkYzc4MGVlYjA2OTVkNTIzNjFlMzZkZjg2YjUwYWI4ZGY1OTlj3KwtUw==: --dhchap-ctrl-secret DHHC-1:01:ZThmN2I2ZGM3OGM0OWE4YjRiYzJjN2QyYTExYjA2YzKWvygO: 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:35.323 06:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.580 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.581 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.510 00:22:36.510 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.510 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.510 06:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.767 { 00:22:36.767 "cntlid": 143, 00:22:36.767 "qid": 0, 00:22:36.767 "state": "enabled", 00:22:36.767 "listen_address": { 00:22:36.767 "trtype": "TCP", 00:22:36.767 "adrfam": "IPv4", 00:22:36.767 "traddr": "10.0.0.2", 00:22:36.767 "trsvcid": "4420" 00:22:36.767 }, 00:22:36.767 "peer_address": { 00:22:36.767 "trtype": "TCP", 00:22:36.767 "adrfam": "IPv4", 00:22:36.767 "traddr": "10.0.0.1", 00:22:36.767 "trsvcid": "45248" 00:22:36.767 }, 00:22:36.767 "auth": { 00:22:36.767 "state": "completed", 00:22:36.767 "digest": "sha512", 00:22:36.767 "dhgroup": "ffdhe8192" 00:22:36.767 } 00:22:36.767 } 00:22:36.767 ]' 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.767 06:51:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.767 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.331 06:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.263 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
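The pass above is one connect_authenticate round: sha512 digest, ffdhe8192 group, key index 0, with bidirectional authentication (both --dhchap-key and --dhchap-ctrlr-key). Stripped of the xtrace noise, the round reduces to roughly the following RPC sequence. This is a minimal sketch: the socket path, addresses, NQNs, and key names are copied from the log; key0/ckey0 are key names assumed to have been registered with the target earlier in auth.sh, and target-side calls are assumed to go to the target app's default RPC socket (the harness's rpc_cmd).

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side: allow the host NQN with a host key and a controller key.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; the DH-HMAC-CHAP handshake runs here.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm the negotiated parameters on the new queue pair,
# matching the jq checks in the log (digest, dhgroup, auth state).
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'

The nvme connect / nvme disconnect pairs interleaved through the log exercise the same handshake through the kernel initiator, passing the DHHC-1 secrets directly on the command line instead of by key name.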
00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.520 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.521 06:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.487 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.487 06:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.487 06:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.487 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.487 { 00:22:39.487 "cntlid": 145, 00:22:39.487 "qid": 0, 00:22:39.487 "state": "enabled", 00:22:39.487 "listen_address": { 00:22:39.487 "trtype": "TCP", 00:22:39.487 "adrfam": "IPv4", 00:22:39.487 "traddr": "10.0.0.2", 00:22:39.487 "trsvcid": "4420" 00:22:39.487 }, 00:22:39.487 "peer_address": { 00:22:39.487 "trtype": "TCP", 00:22:39.487 "adrfam": "IPv4", 00:22:39.487 "traddr": "10.0.0.1", 00:22:39.487 "trsvcid": "45272" 00:22:39.487 }, 00:22:39.487 "auth": { 00:22:39.487 "state": "completed", 00:22:39.487 "digest": "sha512", 00:22:39.487 "dhgroup": "ffdhe8192" 00:22:39.487 } 00:22:39.487 } 00:22:39.487 ]' 00:22:39.487 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.487 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.487 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.745 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:39.745 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.745 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.745 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.745 06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.001 
06:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzFiZDg4NmIwZWFmNDBmNjM5MGU3NzcxNTlmYTEwZjI4ZjcwZDg0ZjgwZDY5ZGI0GGT4dQ==: --dhchap-ctrl-secret DHHC-1:03:NGUwYjI2ZTE4MGI3Y2YxZGJjODgzNTM0ODYxNjgwMDViMmQyODlkY2U4MDU1MGI4OTUwNTc3ZDU5ZWFkMzIxNJGkDLI=: 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:40.932 06:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:41.864 request: 00:22:41.864 { 00:22:41.864 "name": "nvme0", 00:22:41.864 "trtype": "tcp", 00:22:41.864 "traddr": 
"10.0.0.2", 00:22:41.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.864 "adrfam": "ipv4", 00:22:41.864 "trsvcid": "4420", 00:22:41.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.864 "dhchap_key": "key2", 00:22:41.864 "method": "bdev_nvme_attach_controller", 00:22:41.864 "req_id": 1 00:22:41.864 } 00:22:41.864 Got JSON-RPC error response 00:22:41.864 response: 00:22:41.864 { 00:22:41.864 "code": -5, 00:22:41.864 "message": "Input/output error" 00:22:41.864 } 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:41.864 06:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:42.798 request: 00:22:42.798 { 00:22:42.798 "name": "nvme0", 00:22:42.798 "trtype": "tcp", 00:22:42.798 "traddr": "10.0.0.2", 00:22:42.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:42.798 "adrfam": "ipv4", 00:22:42.798 "trsvcid": "4420", 00:22:42.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:42.798 "dhchap_key": "key1", 00:22:42.798 "dhchap_ctrlr_key": "ckey2", 00:22:42.798 "method": "bdev_nvme_attach_controller", 00:22:42.798 "req_id": 1 00:22:42.798 } 00:22:42.798 Got JSON-RPC error response 00:22:42.798 response: 00:22:42.798 { 00:22:42.798 "code": -5, 00:22:42.798 "message": "Input/output error" 00:22:42.798 } 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.798 06:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.361 request: 00:22:43.361 { 00:22:43.361 "name": "nvme0", 00:22:43.361 "trtype": "tcp", 00:22:43.361 "traddr": "10.0.0.2", 00:22:43.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.361 "adrfam": "ipv4", 00:22:43.361 "trsvcid": "4420", 00:22:43.361 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:43.361 "dhchap_key": "key1", 00:22:43.361 "dhchap_ctrlr_key": "ckey1", 00:22:43.361 "method": "bdev_nvme_attach_controller", 00:22:43.361 "req_id": 1 00:22:43.361 } 00:22:43.361 Got JSON-RPC error response 00:22:43.361 response: 00:22:43.361 { 00:22:43.361 "code": -5, 00:22:43.361 "message": "Input/output error" 00:22:43.361 } 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.618 06:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 650690 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 650690 ']' 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 650690 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 650690 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 650690' 00:22:43.618 killing process with pid 650690 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 650690 00:22:43.618 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 650690 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:43.876 06:51:31 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=673185 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 673185 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 673185 ']' 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:43.876 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 673185 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 673185 ']' 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
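After the mismatched-key attempts above fail with JSON-RPC error -5 (Input/output error), as the NOT wrapper expects, the first target (pid 650690) is killed and a fresh one (pid 673185) is started with the nvmf_auth debug log component enabled so the subsequent handshakes are visible in the target log. As a sketch, the relaunch amounts to the following; the binary path, netns name, and flags are as logged, while the readiness loop is only a stand-in for the harness's waitforlisten helper.

# Start the target in the test netns with the full tracepoint mask
# (-e 0xFFFF) and DH-HMAC-CHAP debug logging (-L nvmf_auth);
# --wait-for-rpc keeps subsystems uninitialized until
# framework_start_init is issued over RPC.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done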
00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.133 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.391 06:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:45.318 00:22:45.318 06:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.318 06:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.318 06:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.575 { 00:22:45.575 
"cntlid": 1, 00:22:45.575 "qid": 0, 00:22:45.575 "state": "enabled", 00:22:45.575 "listen_address": { 00:22:45.575 "trtype": "TCP", 00:22:45.575 "adrfam": "IPv4", 00:22:45.575 "traddr": "10.0.0.2", 00:22:45.575 "trsvcid": "4420" 00:22:45.575 }, 00:22:45.575 "peer_address": { 00:22:45.575 "trtype": "TCP", 00:22:45.575 "adrfam": "IPv4", 00:22:45.575 "traddr": "10.0.0.1", 00:22:45.575 "trsvcid": "59540" 00:22:45.575 }, 00:22:45.575 "auth": { 00:22:45.575 "state": "completed", 00:22:45.575 "digest": "sha512", 00:22:45.575 "dhgroup": "ffdhe8192" 00:22:45.575 } 00:22:45.575 } 00:22:45.575 ]' 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.575 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.831 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.832 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.832 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.088 06:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2FlYmY2ZTE1OWE3ZWY4ZTY4OWZjMDExOGU5Y2U1NDk5YTM0OTNiYjU2NWIwNTM3NDMzMWNkYzNjNmYyNjcyNedk6Lo=: 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:47.018 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.276 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.533 request: 00:22:47.533 { 00:22:47.533 "name": "nvme0", 00:22:47.533 "trtype": "tcp", 00:22:47.533 "traddr": "10.0.0.2", 00:22:47.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.533 "adrfam": "ipv4", 00:22:47.533 "trsvcid": "4420", 00:22:47.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:47.533 "dhchap_key": "key3", 00:22:47.533 "method": "bdev_nvme_attach_controller", 00:22:47.533 "req_id": 1 00:22:47.533 } 00:22:47.533 Got JSON-RPC error response 00:22:47.533 response: 00:22:47.533 { 00:22:47.533 "code": -5, 00:22:47.533 "message": "Input/output error" 00:22:47.533 } 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:47.533 06:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.791 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.048 request: 00:22:48.048 { 00:22:48.048 "name": "nvme0", 00:22:48.048 "trtype": "tcp", 00:22:48.048 "traddr": "10.0.0.2", 00:22:48.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:48.048 "adrfam": "ipv4", 00:22:48.048 "trsvcid": "4420", 00:22:48.048 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.048 "dhchap_key": "key3", 00:22:48.048 "method": "bdev_nvme_attach_controller", 00:22:48.048 "req_id": 1 00:22:48.048 } 00:22:48.048 Got JSON-RPC error response 00:22:48.048 response: 00:22:48.048 { 00:22:48.048 "code": -5, 00:22:48.048 "message": "Input/output error" 00:22:48.048 } 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.048 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.049 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.306 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.563 request: 00:22:48.563 { 00:22:48.563 "name": "nvme0", 00:22:48.563 "trtype": "tcp", 00:22:48.563 "traddr": "10.0.0.2", 00:22:48.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:48.563 "adrfam": "ipv4", 00:22:48.563 "trsvcid": "4420", 00:22:48.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.563 "dhchap_key": "key0", 00:22:48.563 "dhchap_ctrlr_key": "key1", 00:22:48.563 "method": "bdev_nvme_attach_controller", 00:22:48.563 "req_id": 1 00:22:48.563 } 00:22:48.563 Got JSON-RPC error response 00:22:48.563 response: 00:22:48.563 { 00:22:48.563 "code": -5, 00:22:48.563 "message": "Input/output error" 00:22:48.563 } 00:22:48.563 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:48.563 06:51:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.563 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.563 06:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.563 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:48.563 06:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:48.820 00:22:48.820 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:48.820 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:48.820 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.078 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.078 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.078 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 650719 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 650719 ']' 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 650719 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 650719 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 650719' 00:22:49.336 killing process with pid 650719 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 650719 00:22:49.336 06:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 650719 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:49.902 
06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.902 rmmod nvme_tcp 00:22:49.902 rmmod nvme_fabrics 00:22:49.902 rmmod nvme_keyring 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 673185 ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 673185 ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 673185' 00:22:49.902 killing process with pid 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 673185 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.902 06:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.440 06:51:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.440 06:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uhI /tmp/spdk.key-sha256.1x9 /tmp/spdk.key-sha384.GYh /tmp/spdk.key-sha512.DTD /tmp/spdk.key-sha512.BpQ /tmp/spdk.key-sha384.UEn /tmp/spdk.key-sha256.Dl0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:52.440 00:22:52.440 real 3m8.723s 00:22:52.440 user 7m19.627s 00:22:52.440 sys 0m24.756s 00:22:52.440 06:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:52.440 06:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 ************************************ 00:22:52.440 END TEST nvmf_auth_target 
00:22:52.440 ************************************ 00:22:52.440 06:51:39 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:52.440 06:51:39 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:52.440 06:51:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:52.440 06:51:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:52.440 06:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 ************************************ 00:22:52.440 START TEST nvmf_bdevio_no_huge 00:22:52.440 ************************************ 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:52.440 * Looking for test storage... 00:22:52.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.440 
06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.440 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.441 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:52.441 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.441 06:51:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:54.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:54.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.343 06:51:41 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:54.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.343 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:54.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.344 
06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:22:54.344 00:22:54.344 --- 10.0.0.2 ping statistics --- 00:22:54.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.344 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:54.344 00:22:54.344 --- 10.0.0.1 ping statistics --- 00:22:54.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.344 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=675825 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 675825 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 675825 ']' 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 
-- # local max_retries=100 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.344 06:51:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.344 [2024-07-15 06:51:41.756886] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:54.344 [2024-07-15 06:51:41.756995] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:54.344 [2024-07-15 06:51:41.830641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.344 [2024-07-15 06:51:41.922334] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.344 [2024-07-15 06:51:41.922397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.344 [2024-07-15 06:51:41.922423] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.344 [2024-07-15 06:51:41.922436] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.344 [2024-07-15 06:51:41.922448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.344 [2024-07-15 06:51:41.922581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.344 [2024-07-15 06:51:41.922669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:54.344 [2024-07-15 06:51:41.922788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:54.344 [2024-07-15 06:51:41.922791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 [2024-07-15 06:51:42.046048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.608 06:51:42 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 Malloc0 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.608 [2024-07-15 06:51:42.083988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.608 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.608 { 00:22:54.608 "params": { 00:22:54.608 "name": "Nvme$subsystem", 00:22:54.608 "trtype": "$TEST_TRANSPORT", 00:22:54.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.608 "adrfam": "ipv4", 00:22:54.608 "trsvcid": "$NVMF_PORT", 00:22:54.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.608 "hdgst": ${hdgst:-false}, 00:22:54.608 "ddgst": ${ddgst:-false} 00:22:54.608 }, 00:22:54.609 "method": "bdev_nvme_attach_controller" 00:22:54.609 } 00:22:54.609 EOF 00:22:54.609 )") 00:22:54.609 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:54.609 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:22:54.609 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:54.609 06:51:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:54.609 "params": { 00:22:54.609 "name": "Nvme1", 00:22:54.609 "trtype": "tcp", 00:22:54.609 "traddr": "10.0.0.2", 00:22:54.609 "adrfam": "ipv4", 00:22:54.609 "trsvcid": "4420", 00:22:54.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.609 "hdgst": false, 00:22:54.609 "ddgst": false 00:22:54.609 }, 00:22:54.609 "method": "bdev_nvme_attach_controller" 00:22:54.609 }' 00:22:54.609 [2024-07-15 06:51:42.133676] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:54.609 [2024-07-15 06:51:42.133753] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid675932 ] 00:22:54.609 [2024-07-15 06:51:42.197828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:54.875 [2024-07-15 06:51:42.283568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.875 [2024-07-15 06:51:42.283621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.875 [2024-07-15 06:51:42.283624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.133 I/O targets: 00:22:55.133 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:55.133 00:22:55.133 00:22:55.133 CUnit - A unit testing framework for C - Version 2.1-3 00:22:55.133 http://cunit.sourceforge.net/ 00:22:55.133 00:22:55.133 00:22:55.133 Suite: bdevio tests on: Nvme1n1 00:22:55.133 Test: blockdev write read block ...passed 00:22:55.133 Test: blockdev write zeroes read block ...passed 00:22:55.133 Test: blockdev write zeroes read no split ...passed 00:22:55.133 Test: blockdev write zeroes read split ...passed 00:22:55.415 Test: blockdev write zeroes read split partial ...passed 00:22:55.415 Test: blockdev reset ...[2024-07-15 06:51:42.766316] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.415 [2024-07-15 06:51:42.766423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d1a00 (9): Bad file descriptor 00:22:55.415 [2024-07-15 06:51:42.818834] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:55.415 passed 00:22:55.415 Test: blockdev write read 8 blocks ...passed 00:22:55.415 Test: blockdev write read size > 128k ...passed 00:22:55.415 Test: blockdev write read invalid size ...passed 00:22:55.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.415 Test: blockdev write read max offset ...passed 00:22:55.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.415 Test: blockdev writev readv 8 blocks ...passed 00:22:55.415 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.674 Test: blockdev writev readv block ...passed 00:22:55.674 Test: blockdev writev readv size > 128k ...passed 00:22:55.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.674 Test: blockdev comparev and writev ...[2024-07-15 06:51:43.034617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.674 [2024-07-15 06:51:43.034653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.674 [2024-07-15 06:51:43.034685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.674 [2024-07-15 06:51:43.034702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.674 [2024-07-15 06:51:43.035079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.674 [2024-07-15 06:51:43.035104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:55.674 [2024-07-15 06:51:43.035127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.674 [2024-07-15 06:51:43.035143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:55.674 [2024-07-15 06:51:43.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.674 [2024-07-15 06:51:43.035516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:55.674 [2024-07-15 06:51:43.035537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.675 [2024-07-15 06:51:43.035559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:55.675 [2024-07-15 06:51:43.035890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.675 [2024-07-15 06:51:43.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:55.675 [2024-07-15 06:51:43.035937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.675 [2024-07-15 06:51:43.035953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:55.675 passed 00:22:55.675 Test: blockdev nvme passthru rw ...passed 00:22:55.675 Test: blockdev nvme passthru vendor specific ...[2024-07-15 06:51:43.120188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.675 [2024-07-15 06:51:43.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:55.675 [2024-07-15 06:51:43.120384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.675 [2024-07-15 06:51:43.120407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:55.675 [2024-07-15 06:51:43.120570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.675 [2024-07-15 06:51:43.120593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:55.675 [2024-07-15 06:51:43.120759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.675 [2024-07-15 06:51:43.120782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:55.675 passed 00:22:55.675 Test: blockdev nvme admin passthru ...passed 00:22:55.675 Test: blockdev copy ...passed 00:22:55.675 00:22:55.675 Run Summary: Type Total Ran Passed Failed Inactive 00:22:55.675 suites 1 1 n/a 0 0 00:22:55.675 tests 23 23 23 0 0 00:22:55.675 asserts 152 152 152 0 n/a 00:22:55.675 00:22:55.675 Elapsed time = 1.243 seconds 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.933 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:55.933 rmmod nvme_tcp 00:22:55.933 rmmod nvme_fabrics 00:22:56.190 rmmod nvme_keyring 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 675825 ']' 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 675825 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 675825 ']' 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 675825 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 675825 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 675825' 00:22:56.190 killing process with pid 675825 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 675825 00:22:56.190 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 675825 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.449 06:51:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.977 06:51:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.977 00:22:58.977 real 0m6.436s 00:22:58.977 user 0m11.009s 00:22:58.978 sys 0m2.427s 00:22:58.978 06:51:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:58.978 06:51:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.978 ************************************ 00:22:58.978 END TEST nvmf_bdevio_no_huge 00:22:58.978 ************************************ 00:22:58.978 06:51:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:58.978 06:51:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:58.978 06:51:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:58.978 06:51:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.978 ************************************ 00:22:58.978 START TEST nvmf_tls 00:22:58.978 ************************************ 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:58.978 * Looking for test storage... 
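
(Annotation.) The teardown traced through here condenses to a few operations. Note that _remove_spdk_ns runs with xtrace disabled, so the namespace deletion below is an assumption about what that helper does, not a traced command; the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) are specific to this rig:

  kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: signal, then reap
  sudo modprobe -v -r nvme-tcp            # produces the rmmod nvme_tcp line above
  sudo modprobe -v -r nvme-fabrics        # drags nvme_fabrics/nvme_keyring out too
  sudo ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1           # traced at nvmf/common.sh@279
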
00:22:58.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.978 06:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.877 
06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:00.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:00.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:00.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:00.877 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.877 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
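
(Annotation.) The discovery loop traced above boils down to: filter the PCI bus for known NIC IDs (0x8086:0x159b is the Intel E810 "ice" part this rig carries), then map each PCI function to its kernel netdev through sysfs. A standalone sketch of the same idiom, assuming lspci is available; the cvl_0_0/cvl_0_1 names appear to be this rig's renaming of the two E810 ports:

  # List E810 ports and the netdevs the ice driver gave them (sketch).
  for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$dev")"
      done
  done
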
-- # (( 2 > 1 )) 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:23:00.878 00:23:00.878 --- 10.0.0.2 ping statistics --- 00:23:00.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.878 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
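
(Annotation.) The plumbing just traced builds the whole NVMe/TCP topology out of one dual-port NIC: one port is pushed into a private namespace to play the target, its twin stays in the root namespace as the initiator. Collected in one place, with the names used on this rig:

  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # the check running here

The two pings (root namespace to 10.0.0.2, then target namespace back to 10.0.0.1) verify both directions before any NVMe traffic is attempted.
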
00:23:00.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:23:00.878 00:23:00.878 --- 10.0.0.1 ping statistics --- 00:23:00.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.878 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=678043 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 678043 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 678043 ']' 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.878 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.878 [2024-07-15 06:51:48.284632] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:00.878 [2024-07-15 06:51:48.284712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.878 [2024-07-15 06:51:48.354913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.878 [2024-07-15 06:51:48.446310] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.878 [2024-07-15 06:51:48.446363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.878 [2024-07-15 06:51:48.446376] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.878 [2024-07-15 06:51:48.446387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.878 [2024-07-15 06:51:48.446405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.878 [2024-07-15 06:51:48.446431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:01.135 06:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:01.393 true 00:23:01.393 06:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.393 06:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:01.650 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:01.650 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:01.650 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:01.909 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.909 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:02.166 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:02.167 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:02.167 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:02.424 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.424 06:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:02.681 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:02.681 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:02.681 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.681 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:02.939 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:02.939 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:02.939 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:03.197 06:51:50 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.197 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:03.455 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:03.455 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:03.455 06:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:03.713 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.713 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.971 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.2Ic3pEqkJZ 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.utI3ENk7Dz 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2Ic3pEqkJZ 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.utI3ENk7Dz 00:23:03.972 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
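
(Annotation.) The key material above is worth decoding. format_interchange_psk wraps the configured PSK in the TP-8006 interchange framing, NVMeTLSkey-1:<hash>:<base64 of the key bytes followed by a 4-byte little-endian CRC32>:, where hash field 01 selects the SHA-256 retained-PSK hash. A sketch of the transform (python3 assumed; this reconstruction treats the hex string itself as the key bytes, which matches the NVMeTLSkey-1:01:MDAx... value printed above, but it is inferred from the trace rather than copied from nvmf/common.sh):

  key=00112233445566778899aabbccddeeff
  # frame = "NVMeTLSkey-1:01:" + base64(key_bytes + CRC32LE(key_bytes)) + ":"
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key"

The chmod 0600 on the two /tmp key files above keeps the interchange keys owner-readable only before they are handed to the target.
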
sock_impl_set_options -i ssl --tls-version 13 00:23:04.230 06:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:04.797 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2Ic3pEqkJZ 00:23:04.797 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2Ic3pEqkJZ 00:23:04.797 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.797 [2024-07-15 06:51:52.390758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.797 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.055 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.313 [2024-07-15 06:51:52.916153] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.313 [2024-07-15 06:51:52.916403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.571 06:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.828 malloc0 00:23:05.828 06:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.085 06:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2Ic3pEqkJZ 00:23:06.085 [2024-07-15 06:51:53.685219] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.342 06:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2Ic3pEqkJZ 00:23:06.342 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.313 Initializing NVMe Controllers 00:23:16.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.313 Initialization complete. Launching workers. 
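
(Annotation.) Pulling the target-side sequence just traced into one place (rpc.py paths shortened, otherwise these are the exact RPCs from the trace, not a reconstruction): the listener is created with -k, which marks it TLS-enabled, and the host entry carries --psk pointing at the interchange key written above.

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2Ic3pEqkJZ

The nvmf_tcp_psk_path WARNING above flags this file-path PSK mechanism as deprecated, with removal scheduled for v24.09 per the trace.
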
00:23:16.313 ======================================================== 00:23:16.313 Latency(us) 00:23:16.313 Device Information : IOPS MiB/s Average min max 00:23:16.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7919.54 30.94 8083.98 1046.81 9486.48 00:23:16.313 ======================================================== 00:23:16.313 Total : 7919.54 30.94 8083.98 1046.81 9486.48 00:23:16.313 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2Ic3pEqkJZ 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2Ic3pEqkJZ' 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=679933 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 679933 /var/tmp/bdevperf.sock 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 679933 ']' 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.313 06:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.313 [2024-07-15 06:52:03.863844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
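
(Annotation.) Each TLS case from here on reuses one harness shape, visible in the traces: start bdevperf idle with -z against a private RPC socket, wait for that socket to answer, attach a TLS controller through it, then drive the run from bdevperf.py. As a sketch, with paths relative to the SPDK tree; waitforlisten is the polling helper from autotest_common.sh:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # ... waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock ...
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.2Ic3pEqkJZ
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
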
00:23:16.313 [2024-07-15 06:52:03.863986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679933 ] 00:23:16.313 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.313 [2024-07-15 06:52:03.924497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.571 [2024-07-15 06:52:04.008485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.571 06:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.571 06:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.571 06:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2Ic3pEqkJZ 00:23:16.829 [2024-07-15 06:52:04.328382] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.829 [2024-07-15 06:52:04.328510] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:16.829 TLSTESTn1 00:23:16.829 06:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.086 Running I/O for 10 seconds... 00:23:27.048 00:23:27.048 Latency(us) 00:23:27.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.048 Verification LBA range: start 0x0 length 0x2000 00:23:27.048 TLSTESTn1 : 10.03 3527.20 13.78 0.00 0.00 36201.42 9660.49 47768.46 00:23:27.048 =================================================================================================================== 00:23:27.048 Total : 3527.20 13.78 0.00 0.00 36201.42 9660.49 47768.46 00:23:27.048 0 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 679933 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 679933 ']' 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 679933 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 679933 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 679933' 00:23:27.048 killing process with pid 679933 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 679933 00:23:27.048 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.048 00:23:27.048 Latency(us) 00:23:27.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.048 
=================================================================================================================== 00:23:27.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.048 [2024-07-15 06:52:14.622037] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.048 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 679933 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.utI3ENk7Dz 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.utI3ENk7Dz 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.utI3ENk7Dz 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.utI3ENk7Dz' 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=681129 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 681129 /var/tmp/bdevperf.sock 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681129 ']' 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:27.305 06:52:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.305 [2024-07-15 06:52:14.894942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:27.305 [2024-07-15 06:52:14.895038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681129 ] 00:23:27.562 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.562 [2024-07-15 06:52:14.956924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.562 [2024-07-15 06:52:15.039690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.562 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:27.562 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:27.562 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.utI3ENk7Dz 00:23:27.819 [2024-07-15 06:52:15.374920] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.819 [2024-07-15 06:52:15.375038] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:27.819 [2024-07-15 06:52:15.380501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:27.819 [2024-07-15 06:52:15.380922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37ed0 (107): Transport endpoint is not connected 00:23:27.819 [2024-07-15 06:52:15.381911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37ed0 (9): Bad file descriptor 00:23:27.819 [2024-07-15 06:52:15.382910] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:27.819 [2024-07-15 06:52:15.382946] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:27.819 [2024-07-15 06:52:15.382963] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:27.819 request: 00:23:27.819 { 00:23:27.819 "name": "TLSTEST", 00:23:27.819 "trtype": "tcp", 00:23:27.820 "traddr": "10.0.0.2", 00:23:27.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.820 "adrfam": "ipv4", 00:23:27.820 "trsvcid": "4420", 00:23:27.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.820 "psk": "/tmp/tmp.utI3ENk7Dz", 00:23:27.820 "method": "bdev_nvme_attach_controller", 00:23:27.820 "req_id": 1 00:23:27.820 } 00:23:27.820 Got JSON-RPC error response 00:23:27.820 response: 00:23:27.820 { 00:23:27.820 "code": -5, 00:23:27.820 "message": "Input/output error" 00:23:27.820 } 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 681129 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681129 ']' 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681129 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681129 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681129' 00:23:27.820 killing process with pid 681129 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681129 00:23:27.820 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.820 00:23:27.820 Latency(us) 00:23:27.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.820 =================================================================================================================== 00:23:27.820 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.820 [2024-07-15 06:52:15.427407] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.820 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681129 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2Ic3pEqkJZ 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2Ic3pEqkJZ 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
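
(Annotation.) The request/response pair above is the shape every negative case in this file asserts on: the attach is replayed with a key the target never registered for that host/subsystem pairing, rpc.py surfaces the JSON-RPC error (code -5, Input/output error) and exits nonzero, and the NOT wrapper turns that expected failure into a pass. Roughly what NOT does, as a sketch; the real helper in autotest_common.sh also runs valid_exec_arg on its argument and special-cases signal deaths (the (( es > 128 )) trace):

  NOT() {
      # inverted run: succeed only if the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.utI3ENk7Dz
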
"$(type -t "$arg")" in 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2Ic3pEqkJZ 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2Ic3pEqkJZ' 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=681268 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 681268 /var/tmp/bdevperf.sock 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681268 ']' 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.077 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.077 [2024-07-15 06:52:15.660885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:28.077 [2024-07-15 06:52:15.660976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681268 ] 00:23:28.077 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.334 [2024-07-15 06:52:15.724219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.334 [2024-07-15 06:52:15.814440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.334 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.334 06:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.334 06:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2Ic3pEqkJZ 00:23:28.591 [2024-07-15 06:52:16.145346] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.591 [2024-07-15 06:52:16.145466] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.591 [2024-07-15 06:52:16.150650] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.591 [2024-07-15 06:52:16.150683] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.591 [2024-07-15 06:52:16.150724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.591 [2024-07-15 06:52:16.151293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211fed0 (107): Transport endpoint is not connected 00:23:28.591 [2024-07-15 06:52:16.152280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211fed0 (9): Bad file descriptor 00:23:28.591 [2024-07-15 06:52:16.153278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.591 [2024-07-15 06:52:16.153298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.591 [2024-07-15 06:52:16.153314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:28.591 request: 00:23:28.591 { 00:23:28.591 "name": "TLSTEST", 00:23:28.591 "trtype": "tcp", 00:23:28.591 "traddr": "10.0.0.2", 00:23:28.591 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.591 "adrfam": "ipv4", 00:23:28.591 "trsvcid": "4420", 00:23:28.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.591 "psk": "/tmp/tmp.2Ic3pEqkJZ", 00:23:28.591 "method": "bdev_nvme_attach_controller", 00:23:28.591 "req_id": 1 00:23:28.591 } 00:23:28.591 Got JSON-RPC error response 00:23:28.591 response: 00:23:28.591 { 00:23:28.591 "code": -5, 00:23:28.591 "message": "Input/output error" 00:23:28.591 } 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 681268 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681268 ']' 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681268 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681268 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681268' 00:23:28.591 killing process with pid 681268 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681268 00:23:28.591 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.591 00:23:28.591 Latency(us) 00:23:28.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.591 =================================================================================================================== 00:23:28.591 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.591 [2024-07-15 06:52:16.204758] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.591 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681268 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2Ic3pEqkJZ 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2Ic3pEqkJZ 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2Ic3pEqkJZ 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2Ic3pEqkJZ' 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=681398 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 681398 /var/tmp/bdevperf.sock 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681398 ']' 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.849 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.849 [2024-07-15 06:52:16.454492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:28.849 [2024-07-15 06:52:16.454572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681398 ] 00:23:29.107 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.107 [2024-07-15 06:52:16.514623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.107 [2024-07-15 06:52:16.597781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.107 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.107 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.107 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2Ic3pEqkJZ 00:23:29.365 [2024-07-15 06:52:16.925037] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.365 [2024-07-15 06:52:16.925165] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:29.365 [2024-07-15 06:52:16.931546] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.365 [2024-07-15 06:52:16.931575] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.365 [2024-07-15 06:52:16.931637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.365 [2024-07-15 06:52:16.931977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208aed0 (107): Transport endpoint is not connected 00:23:29.365 [2024-07-15 06:52:16.932964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208aed0 (9): Bad file descriptor 00:23:29.365 [2024-07-15 06:52:16.933964] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:29.365 [2024-07-15 06:52:16.933983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.365 [2024-07-15 06:52:16.933999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
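Both attach failures so far trip over the same lookup: the target resolves a server-side PSK by an identity string which, going by the tcp.c/posix.c errors above, joins a fixed "NVMe0R01" prefix with the host NQN and the subsystem NQN. A sketch of the identity it could not find for this second case, whose JSON-RPC failure record follows below (this format is only read off the log; the actual derivation is internal to SPDK):

    printf 'NVMe0R01 %s %s\n' \
        nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2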
00:23:29.365 request: 00:23:29.365 { 00:23:29.365 "name": "TLSTEST", 00:23:29.365 "trtype": "tcp", 00:23:29.365 "traddr": "10.0.0.2", 00:23:29.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.365 "adrfam": "ipv4", 00:23:29.365 "trsvcid": "4420", 00:23:29.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.365 "psk": "/tmp/tmp.2Ic3pEqkJZ", 00:23:29.365 "method": "bdev_nvme_attach_controller", 00:23:29.365 "req_id": 1 00:23:29.365 } 00:23:29.365 Got JSON-RPC error response 00:23:29.365 response: 00:23:29.365 { 00:23:29.365 "code": -5, 00:23:29.365 "message": "Input/output error" 00:23:29.365 } 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 681398 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681398 ']' 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681398 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681398 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681398' 00:23:29.365 killing process with pid 681398 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681398 00:23:29.365 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.365 00:23:29.365 Latency(us) 00:23:29.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.365 =================================================================================================================== 00:23:29.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.365 [2024-07-15 06:52:16.977694] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:29.365 06:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681398 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.624 
06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=681420 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 681420 /var/tmp/bdevperf.sock 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681420 ']' 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.624 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.624 [2024-07-15 06:52:17.207354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:29.624 [2024-07-15 06:52:17.207442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681420 ] 00:23:29.624 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.882 [2024-07-15 06:52:17.269732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.882 [2024-07-15 06:52:17.355833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.882 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.882 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.882 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:30.141 [2024-07-15 06:52:17.693676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:30.141 [2024-07-15 06:52:17.694952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975c0 (9): Bad file descriptor 00:23:30.141 [2024-07-15 06:52:17.695946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:30.141 [2024-07-15 06:52:17.695967] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.141 [2024-07-15 06:52:17.695985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
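This third variant omits --psk altogether: the listener was created with -k, so a plain TCP attach fails the same way, although note there is no "Could not find PSK" error this time; the connection simply never completes. The call as issued at target/tls.sh@34 above, with its request/response record following below:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # no --psk against a TLS-enabled (-k) listener: expected to fail with -5
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1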
00:23:30.141 request: 00:23:30.141 { 00:23:30.141 "name": "TLSTEST", 00:23:30.141 "trtype": "tcp", 00:23:30.141 "traddr": "10.0.0.2", 00:23:30.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.141 "adrfam": "ipv4", 00:23:30.141 "trsvcid": "4420", 00:23:30.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.141 "method": "bdev_nvme_attach_controller", 00:23:30.141 "req_id": 1 00:23:30.141 } 00:23:30.141 Got JSON-RPC error response 00:23:30.141 response: 00:23:30.141 { 00:23:30.141 "code": -5, 00:23:30.141 "message": "Input/output error" 00:23:30.141 } 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 681420 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681420 ']' 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681420 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681420 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681420' 00:23:30.141 killing process with pid 681420 00:23:30.141 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681420 00:23:30.141 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.142 00:23:30.142 Latency(us) 00:23:30.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.142 =================================================================================================================== 00:23:30.142 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.142 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681420 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 678043 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 678043 ']' 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 678043 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 678043 00:23:30.401 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:30.402 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:30.402 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 678043' 00:23:30.402 killing process with pid 678043 00:23:30.402 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 678043 00:23:30.402 
[2024-07-15 06:52:17.990436] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:30.402 06:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 678043 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:30.660 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ICBIamNi51 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ICBIamNi51 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=681567 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 681567 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681567 ']' 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.919 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.919 [2024-07-15 06:52:18.344887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
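Before standing up the long-lived target, the harness derived the long-format key above (target/tls.sh@159, key_long). format_key shells out to `python -`; the snippet below is a minimal reconstruction of what it appears to compute, namely base64 of the configured key bytes with their little-endian CRC32 appended, wrapped as NVMeTLSkey-1:<digest>:...:. The function body is an assumption read off the captured output, not copied from nvmf/common.sh:

    format_key() {
        local prefix=$1 key=$2 digest=$3
        # base64(key || crc32(key), little-endian), in NVMe TLS interchange form
        python -c "import base64, zlib; k = b'$key'; print('$prefix:%02x:%s:' % ($digest, base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode()))"
    }
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    # captured above as:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: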
00:23:30.919 [2024-07-15 06:52:18.344998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.919 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.919 [2024-07-15 06:52:18.414546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.919 [2024-07-15 06:52:18.504468] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.919 [2024-07-15 06:52:18.504529] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.919 [2024-07-15 06:52:18.504556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.919 [2024-07-15 06:52:18.504571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.919 [2024-07-15 06:52:18.504584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.919 [2024-07-15 06:52:18.504620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ICBIamNi51 00:23:31.177 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.435 [2024-07-15 06:52:18.862632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.435 06:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.693 06:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.952 [2024-07-15 06:52:19.323846] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.952 [2024-07-15 06:52:19.324106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.952 06:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:32.212 malloc0 00:23:32.213 06:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.473 06:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 
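setup_nvmf_tgt reduces to six RPCs, all visible verbatim in the trace above; the PSK-path deprecation warning that follows is emitted by the final add_host call. Condensed, with only the $rpc shorthand added:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ICBIamNi51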
00:23:32.473 [2024-07-15 06:52:20.061983] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ICBIamNi51 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ICBIamNi51' 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=681851 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 681851 /var/tmp/bdevperf.sock 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 681851 ']' 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.473 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.733 [2024-07-15 06:52:20.127956] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
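This is the positive path: the key file is mode 0600 and host1 is registered with the matching PSK, so the attach below succeeds, the controller surfaces as TLSTESTn1, and a timed verify run is driven through the bdevperf RPC (its Latency table follows):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests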
00:23:32.733 [2024-07-15 06:52:20.128035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681851 ] 00:23:32.733 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.733 [2024-07-15 06:52:20.189202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.733 [2024-07-15 06:52:20.288019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.992 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.992 06:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.992 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:23:33.251 [2024-07-15 06:52:20.622449] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.251 [2024-07-15 06:52:20.622578] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:33.251 TLSTESTn1 00:23:33.251 06:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.251 Running I/O for 10 seconds... 00:23:43.264 00:23:43.264 Latency(us) 00:23:43.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.264 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.264 Verification LBA range: start 0x0 length 0x2000 00:23:43.264 TLSTESTn1 : 10.03 3514.50 13.73 0.00 0.00 36337.81 5946.79 59419.31 00:23:43.264 =================================================================================================================== 00:23:43.264 Total : 3514.50 13.73 0.00 0.00 36337.81 5946.79 59419.31 00:23:43.264 0 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 681851 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681851 ']' 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681851 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681851 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681851' 00:23:43.523 killing process with pid 681851 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681851 00:23:43.523 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.523 00:23:43.523 Latency(us) 00:23:43.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.523 
=================================================================================================================== 00:23:43.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.523 [2024-07-15 06:52:30.921045] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:43.523 06:52:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681851 00:23:43.523 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ICBIamNi51 00:23:43.523 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ICBIamNi51 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ICBIamNi51 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ICBIamNi51 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ICBIamNi51' 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=683158 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 683158 /var/tmp/bdevperf.sock 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 683158 ']' 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:43.783 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.783 [2024-07-15 06:52:31.184206] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
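The next negative case is permissions rather than key material: target/tls.sh@170 loosened the key file to 0666 above, and the initiator-side loader rejects it (bdev_nvme_load_psk's "Incorrect permissions for PSK file" below, surfaced to the RPC caller as -1 Operation not permitted):

    chmod 0666 /tmp/tmp.ICBIamNi51   # deliberately too permissive; attach must fail
    # the harness restores the required owner-only mode later (target/tls.sh@181):
    chmod 0600 /tmp/tmp.ICBIamNi51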
00:23:43.783 [2024-07-15 06:52:31.184302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683158 ] 00:23:43.783 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.783 [2024-07-15 06:52:31.242436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.783 [2024-07-15 06:52:31.324072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.041 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:44.041 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:44.041 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:23:44.041 [2024-07-15 06:52:31.654651] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.041 [2024-07-15 06:52:31.654732] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:44.041 [2024-07-15 06:52:31.654745] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ICBIamNi51 00:23:44.298 request: 00:23:44.298 { 00:23:44.298 "name": "TLSTEST", 00:23:44.298 "trtype": "tcp", 00:23:44.298 "traddr": "10.0.0.2", 00:23:44.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.298 "adrfam": "ipv4", 00:23:44.298 "trsvcid": "4420", 00:23:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.298 "psk": "/tmp/tmp.ICBIamNi51", 00:23:44.298 "method": "bdev_nvme_attach_controller", 00:23:44.298 "req_id": 1 00:23:44.298 } 00:23:44.298 Got JSON-RPC error response 00:23:44.298 response: 00:23:44.298 { 00:23:44.298 "code": -1, 00:23:44.298 "message": "Operation not permitted" 00:23:44.298 } 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 683158 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 683158 ']' 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 683158 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 683158 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:44.298 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 683158' 00:23:44.298 killing process with pid 683158 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 683158 00:23:44.299 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.299 00:23:44.299 Latency(us) 00:23:44.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.299 =================================================================================================================== 00:23:44.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 683158 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 681567 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 681567 ']' 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 681567 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 681567 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 681567' 00:23:44.299 killing process with pid 681567 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 681567 00:23:44.299 [2024-07-15 06:52:31.911520] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:44.299 06:52:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 681567 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=683193 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 683193 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 683193 ']' 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:44.556 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.557 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:44.557 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.814 [2024-07-15 06:52:32.192688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:44.814 [2024-07-15 06:52:32.192770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.814 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.814 [2024-07-15 06:52:32.258990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.814 [2024-07-15 06:52:32.344526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.814 [2024-07-15 06:52:32.344586] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.814 [2024-07-15 06:52:32.344599] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.814 [2024-07-15 06:52:32.344610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.814 [2024-07-15 06:52:32.344620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.814 [2024-07-15 06:52:32.344666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ICBIamNi51 00:23:45.072 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:45.330 [2024-07-15 06:52:32.703212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.330 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:45.588 06:52:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:45.588 [2024-07-15 06:52:33.180477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:45.588 [2024-07-15 06:52:33.180729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.588 06:52:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:45.847 malloc0 00:23:45.847 06:52:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.104 06:52:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:23:46.361 [2024-07-15 06:52:33.913634] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:46.361 [2024-07-15 06:52:33.913672] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:46.361 [2024-07-15 06:52:33.913720] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:46.361 request: 00:23:46.361 { 00:23:46.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.361 "host": "nqn.2016-06.io.spdk:host1", 00:23:46.361 "psk": "/tmp/tmp.ICBIamNi51", 00:23:46.361 "method": "nvmf_subsystem_add_host", 00:23:46.361 "req_id": 1 00:23:46.361 } 00:23:46.361 Got JSON-RPC error response 00:23:46.361 response: 00:23:46.361 { 00:23:46.361 "code": -32603, 00:23:46.361 "message": "Internal error" 00:23:46.361 } 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 683193 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 683193 ']' 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 683193 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 683193 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 683193' 00:23:46.361 killing process with pid 683193 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 683193 00:23:46.361 06:52:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 683193 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ICBIamNi51 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=683484 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 683484 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 683484 ']' 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:46.618 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.878 [2024-07-15 06:52:34.266646] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:46.878 [2024-07-15 06:52:34.266746] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.878 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.878 [2024-07-15 06:52:34.336983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.878 [2024-07-15 06:52:34.425378] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.878 [2024-07-15 06:52:34.425428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.878 [2024-07-15 06:52:34.425452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.878 [2024-07-15 06:52:34.425465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.878 [2024-07-15 06:52:34.425478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
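The same 0600 rule holds on the target side: nvmf_subsystem_add_host above failed inside tcp_load_psk (-32603 Internal error) because the file was still 0666, so the harness killed that target, restored the mode at target/tls.sh@181, and is bringing up a fresh one here; the setup that follows then succeeds. A pre-flight sketch of the check the target effectively enforces (the stat guard is illustrative, not part of the harness):

    key=/tmp/tmp.ICBIamNi51
    [ "$(stat -c '%a' "$key")" = "600" ] || { echo "bad mode on $key" >&2; exit 1; }
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"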
00:23:46.878 [2024-07-15 06:52:34.425507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ICBIamNi51 00:23:47.136 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.395 [2024-07-15 06:52:34.794660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.395 06:52:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.652 06:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.910 [2024-07-15 06:52:35.267960] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.910 [2024-07-15 06:52:35.268250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.910 06:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.910 malloc0 00:23:48.167 06:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.167 06:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:23:48.425 [2024-07-15 06:52:36.001241] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=683768 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 683768 /var/tmp/bdevperf.sock 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 683768 ']' 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:48.425 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.684 [2024-07-15 06:52:36.063109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:48.684 [2024-07-15 06:52:36.063192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683768 ] 00:23:48.684 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.684 [2024-07-15 06:52:36.121019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.685 [2024-07-15 06:52:36.203555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.944 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:48.944 06:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:48.944 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:23:48.944 [2024-07-15 06:52:36.530900] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.944 [2024-07-15 06:52:36.531027] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:49.201 TLSTESTn1 00:23:49.201 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:49.460 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:49.460 "subsystems": [ 00:23:49.460 { 00:23:49.460 "subsystem": "keyring", 00:23:49.460 "config": [] 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "subsystem": "iobuf", 00:23:49.460 "config": [ 00:23:49.460 { 00:23:49.460 "method": "iobuf_set_options", 00:23:49.460 "params": { 00:23:49.460 "small_pool_count": 8192, 00:23:49.460 "large_pool_count": 1024, 00:23:49.460 "small_bufsize": 8192, 00:23:49.460 "large_bufsize": 135168 00:23:49.460 } 00:23:49.460 } 00:23:49.460 ] 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "subsystem": "sock", 00:23:49.460 "config": [ 00:23:49.460 { 00:23:49.460 "method": "sock_set_default_impl", 00:23:49.460 "params": { 00:23:49.460 "impl_name": "posix" 00:23:49.460 } 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "method": "sock_impl_set_options", 00:23:49.460 "params": { 00:23:49.460 "impl_name": "ssl", 00:23:49.460 "recv_buf_size": 4096, 00:23:49.460 "send_buf_size": 4096, 00:23:49.460 "enable_recv_pipe": true, 00:23:49.460 "enable_quickack": false, 00:23:49.460 "enable_placement_id": 0, 00:23:49.460 "enable_zerocopy_send_server": true, 00:23:49.460 "enable_zerocopy_send_client": false, 00:23:49.460 "zerocopy_threshold": 0, 00:23:49.460 "tls_version": 0, 00:23:49.460 "enable_ktls": false 00:23:49.460 } 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "method": "sock_impl_set_options", 00:23:49.460 "params": { 00:23:49.460 "impl_name": "posix", 00:23:49.460 "recv_buf_size": 2097152, 00:23:49.460 "send_buf_size": 2097152, 
00:23:49.460 "enable_recv_pipe": true, 00:23:49.460 "enable_quickack": false, 00:23:49.460 "enable_placement_id": 0, 00:23:49.460 "enable_zerocopy_send_server": true, 00:23:49.460 "enable_zerocopy_send_client": false, 00:23:49.460 "zerocopy_threshold": 0, 00:23:49.460 "tls_version": 0, 00:23:49.460 "enable_ktls": false 00:23:49.460 } 00:23:49.460 } 00:23:49.460 ] 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "subsystem": "vmd", 00:23:49.460 "config": [] 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "subsystem": "accel", 00:23:49.460 "config": [ 00:23:49.460 { 00:23:49.460 "method": "accel_set_options", 00:23:49.460 "params": { 00:23:49.460 "small_cache_size": 128, 00:23:49.460 "large_cache_size": 16, 00:23:49.460 "task_count": 2048, 00:23:49.460 "sequence_count": 2048, 00:23:49.460 "buf_count": 2048 00:23:49.460 } 00:23:49.460 } 00:23:49.460 ] 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "subsystem": "bdev", 00:23:49.460 "config": [ 00:23:49.460 { 00:23:49.460 "method": "bdev_set_options", 00:23:49.460 "params": { 00:23:49.460 "bdev_io_pool_size": 65535, 00:23:49.460 "bdev_io_cache_size": 256, 00:23:49.460 "bdev_auto_examine": true, 00:23:49.460 "iobuf_small_cache_size": 128, 00:23:49.460 "iobuf_large_cache_size": 16 00:23:49.460 } 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "method": "bdev_raid_set_options", 00:23:49.460 "params": { 00:23:49.460 "process_window_size_kb": 1024 00:23:49.460 } 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "method": "bdev_iscsi_set_options", 00:23:49.460 "params": { 00:23:49.460 "timeout_sec": 30 00:23:49.460 } 00:23:49.460 }, 00:23:49.460 { 00:23:49.460 "method": "bdev_nvme_set_options", 00:23:49.460 "params": { 00:23:49.460 "action_on_timeout": "none", 00:23:49.460 "timeout_us": 0, 00:23:49.460 "timeout_admin_us": 0, 00:23:49.461 "keep_alive_timeout_ms": 10000, 00:23:49.461 "arbitration_burst": 0, 00:23:49.461 "low_priority_weight": 0, 00:23:49.461 "medium_priority_weight": 0, 00:23:49.461 "high_priority_weight": 0, 00:23:49.461 "nvme_adminq_poll_period_us": 10000, 00:23:49.461 "nvme_ioq_poll_period_us": 0, 00:23:49.461 "io_queue_requests": 0, 00:23:49.461 "delay_cmd_submit": true, 00:23:49.461 "transport_retry_count": 4, 00:23:49.461 "bdev_retry_count": 3, 00:23:49.461 "transport_ack_timeout": 0, 00:23:49.461 "ctrlr_loss_timeout_sec": 0, 00:23:49.461 "reconnect_delay_sec": 0, 00:23:49.461 "fast_io_fail_timeout_sec": 0, 00:23:49.461 "disable_auto_failback": false, 00:23:49.461 "generate_uuids": false, 00:23:49.461 "transport_tos": 0, 00:23:49.461 "nvme_error_stat": false, 00:23:49.461 "rdma_srq_size": 0, 00:23:49.461 "io_path_stat": false, 00:23:49.461 "allow_accel_sequence": false, 00:23:49.461 "rdma_max_cq_size": 0, 00:23:49.461 "rdma_cm_event_timeout_ms": 0, 00:23:49.461 "dhchap_digests": [ 00:23:49.461 "sha256", 00:23:49.461 "sha384", 00:23:49.461 "sha512" 00:23:49.461 ], 00:23:49.461 "dhchap_dhgroups": [ 00:23:49.461 "null", 00:23:49.461 "ffdhe2048", 00:23:49.461 "ffdhe3072", 00:23:49.461 "ffdhe4096", 00:23:49.461 "ffdhe6144", 00:23:49.461 "ffdhe8192" 00:23:49.461 ] 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "bdev_nvme_set_hotplug", 00:23:49.461 "params": { 00:23:49.461 "period_us": 100000, 00:23:49.461 "enable": false 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "bdev_malloc_create", 00:23:49.461 "params": { 00:23:49.461 "name": "malloc0", 00:23:49.461 "num_blocks": 8192, 00:23:49.461 "block_size": 4096, 00:23:49.461 "physical_block_size": 4096, 00:23:49.461 "uuid": "dba0569c-e570-4e0b-aab9-4362ced6dc63", 
00:23:49.461 "optimal_io_boundary": 0 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "bdev_wait_for_examine" 00:23:49.461 } 00:23:49.461 ] 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "subsystem": "nbd", 00:23:49.461 "config": [] 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "subsystem": "scheduler", 00:23:49.461 "config": [ 00:23:49.461 { 00:23:49.461 "method": "framework_set_scheduler", 00:23:49.461 "params": { 00:23:49.461 "name": "static" 00:23:49.461 } 00:23:49.461 } 00:23:49.461 ] 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "subsystem": "nvmf", 00:23:49.461 "config": [ 00:23:49.461 { 00:23:49.461 "method": "nvmf_set_config", 00:23:49.461 "params": { 00:23:49.461 "discovery_filter": "match_any", 00:23:49.461 "admin_cmd_passthru": { 00:23:49.461 "identify_ctrlr": false 00:23:49.461 } 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_set_max_subsystems", 00:23:49.461 "params": { 00:23:49.461 "max_subsystems": 1024 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_set_crdt", 00:23:49.461 "params": { 00:23:49.461 "crdt1": 0, 00:23:49.461 "crdt2": 0, 00:23:49.461 "crdt3": 0 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_create_transport", 00:23:49.461 "params": { 00:23:49.461 "trtype": "TCP", 00:23:49.461 "max_queue_depth": 128, 00:23:49.461 "max_io_qpairs_per_ctrlr": 127, 00:23:49.461 "in_capsule_data_size": 4096, 00:23:49.461 "max_io_size": 131072, 00:23:49.461 "io_unit_size": 131072, 00:23:49.461 "max_aq_depth": 128, 00:23:49.461 "num_shared_buffers": 511, 00:23:49.461 "buf_cache_size": 4294967295, 00:23:49.461 "dif_insert_or_strip": false, 00:23:49.461 "zcopy": false, 00:23:49.461 "c2h_success": false, 00:23:49.461 "sock_priority": 0, 00:23:49.461 "abort_timeout_sec": 1, 00:23:49.461 "ack_timeout": 0, 00:23:49.461 "data_wr_pool_size": 0 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_create_subsystem", 00:23:49.461 "params": { 00:23:49.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.461 "allow_any_host": false, 00:23:49.461 "serial_number": "SPDK00000000000001", 00:23:49.461 "model_number": "SPDK bdev Controller", 00:23:49.461 "max_namespaces": 10, 00:23:49.461 "min_cntlid": 1, 00:23:49.461 "max_cntlid": 65519, 00:23:49.461 "ana_reporting": false 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_subsystem_add_host", 00:23:49.461 "params": { 00:23:49.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.461 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.461 "psk": "/tmp/tmp.ICBIamNi51" 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_subsystem_add_ns", 00:23:49.461 "params": { 00:23:49.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.461 "namespace": { 00:23:49.461 "nsid": 1, 00:23:49.461 "bdev_name": "malloc0", 00:23:49.461 "nguid": "DBA0569CE5704E0BAAB94362CED6DC63", 00:23:49.461 "uuid": "dba0569c-e570-4e0b-aab9-4362ced6dc63", 00:23:49.461 "no_auto_visible": false 00:23:49.461 } 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "method": "nvmf_subsystem_add_listener", 00:23:49.461 "params": { 00:23:49.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.461 "listen_address": { 00:23:49.461 "trtype": "TCP", 00:23:49.461 "adrfam": "IPv4", 00:23:49.461 "traddr": "10.0.0.2", 00:23:49.461 "trsvcid": "4420" 00:23:49.461 }, 00:23:49.461 "secure_channel": true 00:23:49.461 } 00:23:49.461 } 00:23:49.461 ] 00:23:49.461 } 00:23:49.461 ] 00:23:49.461 }' 00:23:49.461 06:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:49.719 06:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:49.719 "subsystems": [ 00:23:49.719 { 00:23:49.719 "subsystem": "keyring", 00:23:49.719 "config": [] 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "subsystem": "iobuf", 00:23:49.719 "config": [ 00:23:49.719 { 00:23:49.719 "method": "iobuf_set_options", 00:23:49.719 "params": { 00:23:49.719 "small_pool_count": 8192, 00:23:49.719 "large_pool_count": 1024, 00:23:49.719 "small_bufsize": 8192, 00:23:49.719 "large_bufsize": 135168 00:23:49.719 } 00:23:49.719 } 00:23:49.719 ] 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "subsystem": "sock", 00:23:49.719 "config": [ 00:23:49.719 { 00:23:49.719 "method": "sock_set_default_impl", 00:23:49.719 "params": { 00:23:49.719 "impl_name": "posix" 00:23:49.719 } 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "method": "sock_impl_set_options", 00:23:49.719 "params": { 00:23:49.719 "impl_name": "ssl", 00:23:49.719 "recv_buf_size": 4096, 00:23:49.719 "send_buf_size": 4096, 00:23:49.719 "enable_recv_pipe": true, 00:23:49.719 "enable_quickack": false, 00:23:49.719 "enable_placement_id": 0, 00:23:49.719 "enable_zerocopy_send_server": true, 00:23:49.719 "enable_zerocopy_send_client": false, 00:23:49.719 "zerocopy_threshold": 0, 00:23:49.719 "tls_version": 0, 00:23:49.719 "enable_ktls": false 00:23:49.719 } 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "method": "sock_impl_set_options", 00:23:49.719 "params": { 00:23:49.719 "impl_name": "posix", 00:23:49.719 "recv_buf_size": 2097152, 00:23:49.719 "send_buf_size": 2097152, 00:23:49.719 "enable_recv_pipe": true, 00:23:49.719 "enable_quickack": false, 00:23:49.719 "enable_placement_id": 0, 00:23:49.719 "enable_zerocopy_send_server": true, 00:23:49.719 "enable_zerocopy_send_client": false, 00:23:49.719 "zerocopy_threshold": 0, 00:23:49.719 "tls_version": 0, 00:23:49.719 "enable_ktls": false 00:23:49.719 } 00:23:49.719 } 00:23:49.719 ] 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "subsystem": "vmd", 00:23:49.719 "config": [] 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "subsystem": "accel", 00:23:49.719 "config": [ 00:23:49.719 { 00:23:49.719 "method": "accel_set_options", 00:23:49.719 "params": { 00:23:49.719 "small_cache_size": 128, 00:23:49.719 "large_cache_size": 16, 00:23:49.719 "task_count": 2048, 00:23:49.719 "sequence_count": 2048, 00:23:49.719 "buf_count": 2048 00:23:49.719 } 00:23:49.719 } 00:23:49.719 ] 00:23:49.719 }, 00:23:49.719 { 00:23:49.719 "subsystem": "bdev", 00:23:49.719 "config": [ 00:23:49.719 { 00:23:49.719 "method": "bdev_set_options", 00:23:49.719 "params": { 00:23:49.719 "bdev_io_pool_size": 65535, 00:23:49.719 "bdev_io_cache_size": 256, 00:23:49.719 "bdev_auto_examine": true, 00:23:49.719 "iobuf_small_cache_size": 128, 00:23:49.719 "iobuf_large_cache_size": 16 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_raid_set_options", 00:23:49.720 "params": { 00:23:49.720 "process_window_size_kb": 1024 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_iscsi_set_options", 00:23:49.720 "params": { 00:23:49.720 "timeout_sec": 30 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_nvme_set_options", 00:23:49.720 "params": { 00:23:49.720 "action_on_timeout": "none", 00:23:49.720 "timeout_us": 0, 00:23:49.720 "timeout_admin_us": 0, 00:23:49.720 "keep_alive_timeout_ms": 10000, 00:23:49.720 "arbitration_burst": 0, 00:23:49.720 "low_priority_weight": 0, 
00:23:49.720 "medium_priority_weight": 0, 00:23:49.720 "high_priority_weight": 0, 00:23:49.720 "nvme_adminq_poll_period_us": 10000, 00:23:49.720 "nvme_ioq_poll_period_us": 0, 00:23:49.720 "io_queue_requests": 512, 00:23:49.720 "delay_cmd_submit": true, 00:23:49.720 "transport_retry_count": 4, 00:23:49.720 "bdev_retry_count": 3, 00:23:49.720 "transport_ack_timeout": 0, 00:23:49.720 "ctrlr_loss_timeout_sec": 0, 00:23:49.720 "reconnect_delay_sec": 0, 00:23:49.720 "fast_io_fail_timeout_sec": 0, 00:23:49.720 "disable_auto_failback": false, 00:23:49.720 "generate_uuids": false, 00:23:49.720 "transport_tos": 0, 00:23:49.720 "nvme_error_stat": false, 00:23:49.720 "rdma_srq_size": 0, 00:23:49.720 "io_path_stat": false, 00:23:49.720 "allow_accel_sequence": false, 00:23:49.720 "rdma_max_cq_size": 0, 00:23:49.720 "rdma_cm_event_timeout_ms": 0, 00:23:49.720 "dhchap_digests": [ 00:23:49.720 "sha256", 00:23:49.720 "sha384", 00:23:49.720 "sha512" 00:23:49.720 ], 00:23:49.720 "dhchap_dhgroups": [ 00:23:49.720 "null", 00:23:49.720 "ffdhe2048", 00:23:49.720 "ffdhe3072", 00:23:49.720 "ffdhe4096", 00:23:49.720 "ffdhe6144", 00:23:49.720 "ffdhe8192" 00:23:49.720 ] 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_nvme_attach_controller", 00:23:49.720 "params": { 00:23:49.720 "name": "TLSTEST", 00:23:49.720 "trtype": "TCP", 00:23:49.720 "adrfam": "IPv4", 00:23:49.720 "traddr": "10.0.0.2", 00:23:49.720 "trsvcid": "4420", 00:23:49.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.720 "prchk_reftag": false, 00:23:49.720 "prchk_guard": false, 00:23:49.720 "ctrlr_loss_timeout_sec": 0, 00:23:49.720 "reconnect_delay_sec": 0, 00:23:49.720 "fast_io_fail_timeout_sec": 0, 00:23:49.720 "psk": "/tmp/tmp.ICBIamNi51", 00:23:49.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.720 "hdgst": false, 00:23:49.720 "ddgst": false 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_nvme_set_hotplug", 00:23:49.720 "params": { 00:23:49.720 "period_us": 100000, 00:23:49.720 "enable": false 00:23:49.720 } 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "method": "bdev_wait_for_examine" 00:23:49.720 } 00:23:49.720 ] 00:23:49.720 }, 00:23:49.720 { 00:23:49.720 "subsystem": "nbd", 00:23:49.720 "config": [] 00:23:49.720 } 00:23:49.720 ] 00:23:49.720 }' 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 683768 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 683768 ']' 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 683768 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 683768 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 683768' 00:23:49.720 killing process with pid 683768 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 683768 00:23:49.720 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.720 00:23:49.720 Latency(us) 00:23:49.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.720 
=================================================================================================================== 00:23:49.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.720 [2024-07-15 06:52:37.282693] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:49.720 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 683768 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 683484 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 683484 ']' 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 683484 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 683484 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 683484' 00:23:49.979 killing process with pid 683484 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 683484 00:23:49.979 [2024-07-15 06:52:37.509364] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.979 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 683484 00:23:50.239 06:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:50.239 06:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.239 06:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:50.239 "subsystems": [ 00:23:50.239 { 00:23:50.239 "subsystem": "keyring", 00:23:50.239 "config": [] 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "subsystem": "iobuf", 00:23:50.239 "config": [ 00:23:50.239 { 00:23:50.239 "method": "iobuf_set_options", 00:23:50.239 "params": { 00:23:50.239 "small_pool_count": 8192, 00:23:50.239 "large_pool_count": 1024, 00:23:50.239 "small_bufsize": 8192, 00:23:50.239 "large_bufsize": 135168 00:23:50.239 } 00:23:50.239 } 00:23:50.239 ] 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "subsystem": "sock", 00:23:50.239 "config": [ 00:23:50.239 { 00:23:50.239 "method": "sock_set_default_impl", 00:23:50.239 "params": { 00:23:50.239 "impl_name": "posix" 00:23:50.239 } 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "method": "sock_impl_set_options", 00:23:50.239 "params": { 00:23:50.239 "impl_name": "ssl", 00:23:50.239 "recv_buf_size": 4096, 00:23:50.239 "send_buf_size": 4096, 00:23:50.239 "enable_recv_pipe": true, 00:23:50.239 "enable_quickack": false, 00:23:50.239 "enable_placement_id": 0, 00:23:50.239 "enable_zerocopy_send_server": true, 00:23:50.239 "enable_zerocopy_send_client": false, 00:23:50.239 "zerocopy_threshold": 0, 00:23:50.239 "tls_version": 0, 00:23:50.239 "enable_ktls": false 00:23:50.239 } 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "method": "sock_impl_set_options", 00:23:50.239 "params": { 00:23:50.239 "impl_name": "posix", 00:23:50.239 "recv_buf_size": 2097152, 00:23:50.239 "send_buf_size": 2097152, 00:23:50.239 "enable_recv_pipe": true, 00:23:50.239 
"enable_quickack": false, 00:23:50.239 "enable_placement_id": 0, 00:23:50.239 "enable_zerocopy_send_server": true, 00:23:50.239 "enable_zerocopy_send_client": false, 00:23:50.239 "zerocopy_threshold": 0, 00:23:50.239 "tls_version": 0, 00:23:50.239 "enable_ktls": false 00:23:50.239 } 00:23:50.239 } 00:23:50.239 ] 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "subsystem": "vmd", 00:23:50.239 "config": [] 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "subsystem": "accel", 00:23:50.239 "config": [ 00:23:50.239 { 00:23:50.239 "method": "accel_set_options", 00:23:50.239 "params": { 00:23:50.239 "small_cache_size": 128, 00:23:50.239 "large_cache_size": 16, 00:23:50.239 "task_count": 2048, 00:23:50.239 "sequence_count": 2048, 00:23:50.239 "buf_count": 2048 00:23:50.239 } 00:23:50.239 } 00:23:50.239 ] 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "subsystem": "bdev", 00:23:50.239 "config": [ 00:23:50.239 { 00:23:50.239 "method": "bdev_set_options", 00:23:50.239 "params": { 00:23:50.239 "bdev_io_pool_size": 65535, 00:23:50.239 "bdev_io_cache_size": 256, 00:23:50.239 "bdev_auto_examine": true, 00:23:50.239 "iobuf_small_cache_size": 128, 00:23:50.239 "iobuf_large_cache_size": 16 00:23:50.239 } 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "method": "bdev_raid_set_options", 00:23:50.239 "params": { 00:23:50.239 "process_window_size_kb": 1024 00:23:50.239 } 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "method": "bdev_iscsi_set_options", 00:23:50.240 "params": { 00:23:50.240 "timeout_sec": 30 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "bdev_nvme_set_options", 00:23:50.240 "params": { 00:23:50.240 "action_on_timeout": "none", 00:23:50.240 "timeout_us": 0, 00:23:50.240 "timeout_admin_us": 0, 00:23:50.240 "keep_alive_timeout_ms": 10000, 00:23:50.240 "arbitration_burst": 0, 00:23:50.240 "low_priority_weight": 0, 00:23:50.240 "medium_priority_weight": 0, 00:23:50.240 "high_priority_weight": 0, 00:23:50.240 "nvme_adminq_poll_period_us": 10000, 00:23:50.240 "nvme_ioq_poll_period_us": 0, 00:23:50.240 "io_queue_requests": 0, 00:23:50.240 "delay_cmd_submit": true, 00:23:50.240 "transport_retry_count": 4, 00:23:50.240 "bdev_retry_count": 3, 00:23:50.240 "transport_ack_timeout": 0, 00:23:50.240 "ctrlr_loss_timeout_sec": 0, 00:23:50.240 "reconnect_delay_sec": 0, 00:23:50.240 "fast_io_fail_timeout_sec": 0, 00:23:50.240 "disable_auto_failback": false, 00:23:50.240 "generate_uuids": false, 00:23:50.240 "transport_tos": 0, 00:23:50.240 "nvme_error_stat": false, 00:23:50.240 "rdma_srq_size": 0, 00:23:50.240 "io_path_stat": false, 00:23:50.240 "allow_accel_sequence": false, 00:23:50.240 "rdma_max_cq_size": 0, 00:23:50.240 "rdma_cm_event_timeout_ms": 0, 00:23:50.240 "dhchap_digests": [ 00:23:50.240 "sha256", 00:23:50.240 "sha384", 00:23:50.240 "sha512" 00:23:50.240 ], 00:23:50.240 "dhchap_dhgroups": [ 00:23:50.240 "null", 00:23:50.240 "ffdhe2048", 00:23:50.240 "ffdhe3072", 00:23:50.240 "ffdhe4096", 00:23:50.240 "ffdhe6144", 00:23:50.240 "ffdhe8192" 00:23:50.240 ] 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "bdev_nvme_set_hotplug", 00:23:50.240 "params": { 00:23:50.240 "period_us": 100000, 00:23:50.240 "enable": false 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "bdev_malloc_create", 00:23:50.240 "params": { 00:23:50.240 "name": "malloc0", 00:23:50.240 "num_blocks": 8192, 00:23:50.240 "block_size": 4096, 00:23:50.240 "physical_block_size": 4096, 00:23:50.240 "uuid": "dba0569c-e570-4e0b-aab9-4362ced6dc63", 00:23:50.240 "optimal_io_boundary": 0 00:23:50.240 } 
00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "bdev_wait_for_examine" 00:23:50.240 } 00:23:50.240 ] 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "subsystem": "nbd", 00:23:50.240 "config": [] 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "subsystem": "scheduler", 00:23:50.240 "config": [ 00:23:50.240 { 00:23:50.240 "method": "framework_set_scheduler", 00:23:50.240 "params": { 00:23:50.240 "name": "static" 00:23:50.240 } 00:23:50.240 } 00:23:50.240 ] 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "subsystem": "nvmf", 00:23:50.240 "config": [ 00:23:50.240 { 00:23:50.240 "method": "nvmf_set_config", 00:23:50.240 "params": { 00:23:50.240 "discovery_filter": "match_any", 00:23:50.240 "admin_cmd_passthru": { 00:23:50.240 "identify_ctrlr": false 00:23:50.240 } 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_set_max_subsystems", 00:23:50.240 "params": { 00:23:50.240 "max_subsystems": 1024 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_set_crdt", 00:23:50.240 "params": { 00:23:50.240 "crdt1": 0, 00:23:50.240 "crdt2": 0, 00:23:50.240 "crdt3": 0 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_create_transport", 00:23:50.240 "params": { 00:23:50.240 "trtype": "TCP", 00:23:50.240 "max_queue_depth": 128, 00:23:50.240 "max_io_qpairs_per_ctrlr": 127, 00:23:50.240 "in_capsule_data_size": 4096, 00:23:50.240 "max_io_size": 131072, 00:23:50.240 "io_unit_size": 131072, 00:23:50.240 "max_aq_depth": 128, 00:23:50.240 "num_shared_buffers": 511, 00:23:50.240 "buf_cache_size": 4294967295, 00:23:50.240 "dif_insert_or_strip": false, 00:23:50.240 "zcopy": false, 00:23:50.240 "c2h_success": false, 00:23:50.240 "sock_priority": 0, 00:23:50.240 "abort_timeout_sec": 1, 00:23:50.240 "ack_timeout": 0, 00:23:50.240 "data_wr_pool_size": 0 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_create_subsystem", 00:23:50.240 "params": { 00:23:50.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.240 "allow_any_host": false, 00:23:50.240 "serial_number": "SPDK00000000000001", 00:23:50.240 "model_number": "SPDK bdev Controller", 00:23:50.240 "max_namespaces": 10, 00:23:50.240 "min_cntlid": 1, 00:23:50.240 "max_cntlid": 65519, 00:23:50.240 "ana_reporting": false 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_subsystem_add_host", 00:23:50.240 "params": { 00:23:50.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.240 "host": "nqn.2016-06.io.spdk:host1", 00:23:50.240 "psk": "/tmp/tmp.ICBIamNi51" 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_subsystem_add_ns", 00:23:50.240 "params": { 00:23:50.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.240 "namespace": { 00:23:50.240 "nsid": 1, 00:23:50.240 "bdev_name": "malloc0", 00:23:50.240 "nguid": "DBA0569CE5704E0BAAB94362CED6DC63", 00:23:50.240 "uuid": "dba0569c-e570-4e0b-aab9-4362ced6dc63", 00:23:50.240 "no_auto_visible": false 00:23:50.240 } 00:23:50.240 } 00:23:50.240 }, 00:23:50.240 { 00:23:50.240 "method": "nvmf_subsystem_add_listener", 00:23:50.240 "params": { 00:23:50.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.240 "listen_address": { 00:23:50.240 "trtype": "TCP", 00:23:50.240 "adrfam": "IPv4", 00:23:50.240 "traddr": "10.0.0.2", 00:23:50.240 "trsvcid": "4420" 00:23:50.240 }, 00:23:50.240 "secure_channel": true 00:23:50.240 } 00:23:50.240 } 00:23:50.240 ] 00:23:50.240 } 00:23:50.240 ] 00:23:50.240 }' 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:50.240 06:52:37 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=683918 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 683918 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 683918 ']' 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.240 06:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.240 [2024-07-15 06:52:37.820201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:50.240 [2024-07-15 06:52:37.820298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.500 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.500 [2024-07-15 06:52:37.894324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.500 [2024-07-15 06:52:37.984089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.500 [2024-07-15 06:52:37.984151] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.500 [2024-07-15 06:52:37.984176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.500 [2024-07-15 06:52:37.984191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.500 [2024-07-15 06:52:37.984204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
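For orientation: the long JSON blob echoed above and fed to nvmf_tgt on /dev/fd/62 simply replays the target state the test otherwise builds over RPC. A minimal sketch of the equivalent rpc.py sequence, assuming the same NQNs, address, and PSK file as this run (rpc.py stands in for the full scripts/rpc.py path used in the log):

    rpc.py nvmf_create_transport -t tcp -o                # TCP transport with default options
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests the TLS (secure channel) listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB ram disk = 8192 blocks of 4096 B, matching the saved config
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51   # PSK-by-path form, deprecated per the warnings in this log

This is the same sequence the setup_nvmf_tgt helper runs at tls.sh@51-58 further down in this log.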
00:23:50.500 [2024-07-15 06:52:37.984290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.759 [2024-07-15 06:52:38.216975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.759 [2024-07-15 06:52:38.232919] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.759 [2024-07-15 06:52:38.248972] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.759 [2024-07-15 06:52:38.257083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=684071 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 684071 /var/tmp/bdevperf.sock 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 684071 ']' 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.327 06:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:51.327 "subsystems": [ 00:23:51.327 { 00:23:51.327 "subsystem": "keyring", 00:23:51.327 "config": [] 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "subsystem": "iobuf", 00:23:51.327 "config": [ 00:23:51.327 { 00:23:51.327 "method": "iobuf_set_options", 00:23:51.327 "params": { 00:23:51.327 "small_pool_count": 8192, 00:23:51.327 "large_pool_count": 1024, 00:23:51.327 "small_bufsize": 8192, 00:23:51.327 "large_bufsize": 135168 00:23:51.327 } 00:23:51.327 } 00:23:51.327 ] 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "subsystem": "sock", 00:23:51.327 "config": [ 00:23:51.327 { 00:23:51.327 "method": "sock_set_default_impl", 00:23:51.327 "params": { 00:23:51.327 "impl_name": "posix" 00:23:51.327 } 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "method": "sock_impl_set_options", 00:23:51.327 "params": { 00:23:51.327 "impl_name": "ssl", 00:23:51.327 "recv_buf_size": 4096, 00:23:51.327 "send_buf_size": 4096, 00:23:51.327 "enable_recv_pipe": true, 00:23:51.327 "enable_quickack": false, 00:23:51.327 "enable_placement_id": 0, 00:23:51.327 "enable_zerocopy_send_server": true, 00:23:51.327 "enable_zerocopy_send_client": false, 00:23:51.327 "zerocopy_threshold": 0, 00:23:51.327 "tls_version": 0, 00:23:51.327 "enable_ktls": false 00:23:51.327 } 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "method": "sock_impl_set_options", 00:23:51.327 "params": { 00:23:51.327 "impl_name": "posix", 00:23:51.327 "recv_buf_size": 2097152, 00:23:51.327 "send_buf_size": 2097152, 00:23:51.327 "enable_recv_pipe": true, 00:23:51.327 
"enable_quickack": false, 00:23:51.327 "enable_placement_id": 0, 00:23:51.327 "enable_zerocopy_send_server": true, 00:23:51.327 "enable_zerocopy_send_client": false, 00:23:51.327 "zerocopy_threshold": 0, 00:23:51.327 "tls_version": 0, 00:23:51.327 "enable_ktls": false 00:23:51.327 } 00:23:51.327 } 00:23:51.327 ] 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "subsystem": "vmd", 00:23:51.327 "config": [] 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "subsystem": "accel", 00:23:51.327 "config": [ 00:23:51.327 { 00:23:51.327 "method": "accel_set_options", 00:23:51.327 "params": { 00:23:51.327 "small_cache_size": 128, 00:23:51.327 "large_cache_size": 16, 00:23:51.327 "task_count": 2048, 00:23:51.327 "sequence_count": 2048, 00:23:51.327 "buf_count": 2048 00:23:51.327 } 00:23:51.327 } 00:23:51.327 ] 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "subsystem": "bdev", 00:23:51.327 "config": [ 00:23:51.327 { 00:23:51.327 "method": "bdev_set_options", 00:23:51.327 "params": { 00:23:51.327 "bdev_io_pool_size": 65535, 00:23:51.327 "bdev_io_cache_size": 256, 00:23:51.327 "bdev_auto_examine": true, 00:23:51.327 "iobuf_small_cache_size": 128, 00:23:51.327 "iobuf_large_cache_size": 16 00:23:51.327 } 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "method": "bdev_raid_set_options", 00:23:51.327 "params": { 00:23:51.327 "process_window_size_kb": 1024 00:23:51.327 } 00:23:51.327 }, 00:23:51.327 { 00:23:51.327 "method": "bdev_iscsi_set_options", 00:23:51.327 "params": { 00:23:51.327 "timeout_sec": 30 00:23:51.327 } 00:23:51.328 }, 00:23:51.328 { 00:23:51.328 "method": "bdev_nvme_set_options", 00:23:51.328 "params": { 00:23:51.328 "action_on_timeout": "none", 00:23:51.328 "timeout_us": 0, 00:23:51.328 "timeout_admin_us": 0, 00:23:51.328 "keep_alive_timeout_ms": 10000, 00:23:51.328 "arbitration_burst": 0, 00:23:51.328 "low_priority_weight": 0, 00:23:51.328 "medium_priority_weight": 0, 00:23:51.328 "high_priority_weight": 0, 00:23:51.328 "nvme_adminq_poll_period_us": 10000, 00:23:51.328 "nvme_ioq_poll_period_us": 0, 00:23:51.328 "io_queue_requests": 512, 00:23:51.328 "delay_cmd_submit": true, 00:23:51.328 "transport_retry_count": 4, 00:23:51.328 "bdev_retry_count": 3, 00:23:51.328 "transport_ack_timeout": 0, 00:23:51.328 "ctrlr_loss_timeout_sec": 0, 00:23:51.328 "reconnect_delay_sec": 0, 00:23:51.328 "fast_io_fail_timeout_sec": 0, 00:23:51.328 "disable_auto_failback": false, 00:23:51.328 "generate_uuids": false, 00:23:51.328 "transport_tos": 0, 00:23:51.328 "nvme_error_stat": false, 00:23:51.328 "rdma_srq_size": 0, 00:23:51.328 "io_path_stat": false, 00:23:51.328 "allow_accel_sequence": false, 00:23:51.328 "rdma_max_cq_size": 0, 00:23:51.328 "rdma_cm_event_timeout_ms": 0, 00:23:51.328 "dhchap_digests": [ 00:23:51.328 "sha256", 00:23:51.328 "sha384", 00:23:51.328 "sha512" 00:23:51.328 ], 00:23:51.328 "dhchap_dhgroups": [ 00:23:51.328 "null", 00:23:51.328 "ffdhe2048", 00:23:51.328 "ffdhe3072", 00:23:51.328 "ffdhe4096", 00:23:51.328 "ffd 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:51.328 he6144", 00:23:51.328 "ffdhe8192" 00:23:51.328 ] 00:23:51.328 } 00:23:51.328 }, 00:23:51.328 { 00:23:51.328 "method": "bdev_nvme_attach_controller", 00:23:51.328 "params": { 00:23:51.328 "name": "TLSTEST", 00:23:51.328 "trtype": "TCP", 00:23:51.328 "adrfam": "IPv4", 00:23:51.328 "traddr": "10.0.0.2", 00:23:51.328 "trsvcid": "4420", 00:23:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.328 "prchk_reftag": false, 00:23:51.328 "prchk_guard": false, 00:23:51.328 "ctrlr_loss_timeout_sec": 0, 00:23:51.328 "reconnect_delay_sec": 0, 00:23:51.328 "fast_io_fail_timeout_sec": 0, 00:23:51.328 "psk": "/tmp/tmp.ICBIamNi51", 00:23:51.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.328 "hdgst": false, 00:23:51.328 "ddgst": false 00:23:51.328 } 00:23:51.328 }, 00:23:51.328 { 00:23:51.328 "method": "bdev_nvme_set_hotplug", 00:23:51.328 "params": { 00:23:51.328 "period_us": 100000, 00:23:51.328 "enable": false 00:23:51.328 } 00:23:51.328 }, 00:23:51.328 { 00:23:51.328 "method": "bdev_wait_for_examine" 00:23:51.328 } 00:23:51.328 ] 00:23:51.328 }, 00:23:51.328 { 00:23:51.328 "subsystem": "nbd", 00:23:51.328 "config": [] 00:23:51.328 } 00:23:51.328 ] 00:23:51.328 }' 00:23:51.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.328 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.328 06:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.328 [2024-07-15 06:52:38.830307] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:51.328 [2024-07-15 06:52:38.830398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684071 ] 00:23:51.328 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.328 [2024-07-15 06:52:38.891451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.588 [2024-07-15 06:52:38.979328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.588 [2024-07-15 06:52:39.149587] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.588 [2024-07-15 06:52:39.149704] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:52.527 06:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.527 06:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:52.527 06:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:52.527 Running I/O for 10 seconds... 
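The 10-second run just started is driven entirely over bdevperf's RPC socket: the binary was launched with -z (wait for RPC) plus -q 128 -o 4096 -w verify -t 10, its controller config arrived as the JSON echoed into /dev/fd/63, and the perform_tests call kicks the workload off. A minimal sketch of the same flow, with paths shortened and config.json standing in for the /dev/fd/63 pipe used here:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c config.json &
    # wait until /var/tmp/bdevperf.sock is listening (the harness's waitforlisten), then:
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Note that -t 20 on bdevperf.py appears to be the helper's RPC timeout; the run length comes from bdevperf's own -t 10.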
00:24:02.565 00:24:02.565 Latency(us) 00:24:02.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.565 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:02.565 Verification LBA range: start 0x0 length 0x2000 00:24:02.565 TLSTESTn1 : 10.03 3570.00 13.95 0.00 0.00 35772.99 10291.58 67574.90 00:24:02.565 =================================================================================================================== 00:24:02.565 Total : 3570.00 13.95 0.00 0.00 35772.99 10291.58 67574.90 00:24:02.565 0 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 684071 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 684071 ']' 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 684071 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 684071 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 684071' 00:24:02.565 killing process with pid 684071 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 684071 00:24:02.565 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.565 00:24:02.565 Latency(us) 00:24:02.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.565 =================================================================================================================== 00:24:02.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.565 [2024-07-15 06:52:49.990523] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:02.565 06:52:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 684071 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 683918 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 683918 ']' 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 683918 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 683918 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 683918' 00:24:02.823 killing process with pid 683918 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 683918 00:24:02.823 [2024-07-15 06:52:50.245000] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 
1 times 00:24:02.823 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 683918 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=685508 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 685508 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 685508 ']' 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.081 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.081 [2024-07-15 06:52:50.535020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:03.081 [2024-07-15 06:52:50.535096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.081 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.081 [2024-07-15 06:52:50.604692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.081 [2024-07-15 06:52:50.692469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.081 [2024-07-15 06:52:50.692530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.081 [2024-07-15 06:52:50.692565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.081 [2024-07-15 06:52:50.692580] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.081 [2024-07-15 06:52:50.692591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
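Because this nvmf_tgt instance runs with -e 0xFFFF, all tracepoint groups are recorded into the shared-memory file named in the startup notices below. A minimal sketch of acting on the app's own hints (commands as printed by the app; exact binary locations vary by build):

    spdk_trace -s nvmf -i 0            # snapshot live events from shm instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the ring buffer for offline analysis/debug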
00:24:03.081 [2024-07-15 06:52:50.692620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ICBIamNi51 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ICBIamNi51 00:24:03.339 06:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:03.597 [2024-07-15 06:52:51.069197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.597 06:52:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:03.854 06:52:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:04.112 [2024-07-15 06:52:51.558527] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.112 [2024-07-15 06:52:51.558784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.112 06:52:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:04.369 malloc0 00:24:04.370 06:52:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:04.627 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ICBIamNi51 00:24:04.885 [2024-07-15 06:52:52.287596] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=685683 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 685683 /var/tmp/bdevperf.sock 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 685683 ']' 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:04.885 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.885 [2024-07-15 06:52:52.351633] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:04.886 [2024-07-15 06:52:52.351717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685683 ] 00:24:04.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.886 [2024-07-15 06:52:52.415901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.143 [2024-07-15 06:52:52.506384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.143 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:05.143 06:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:05.143 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ICBIamNi51 00:24:05.401 06:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:05.659 [2024-07-15 06:52:53.100700] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.659 nvme0n1 00:24:05.659 06:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.917 Running I/O for 1 seconds... 
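Note the difference from the first bdevperf attach at tls.sh@192: this run registers the PSK with the keyring first and passes a key name, so the spdk_nvme_ctrlr_opts.psk deprecation warning seen on the earlier attaches does not appear here. The two RPCs, as issued above (rpc.py shortened from the full scripts/rpc.py path):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ICBIamNi51
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1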
00:24:06.852 00:24:06.852 Latency(us) 00:24:06.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:06.852 Verification LBA range: start 0x0 length 0x2000 00:24:06.852 nvme0n1 : 1.03 3239.27 12.65 0.00 0.00 38894.23 8883.77 53982.25 00:24:06.852 =================================================================================================================== 00:24:06.852 Total : 3239.27 12.65 0.00 0.00 38894.23 8883.77 53982.25 00:24:06.852 0 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 685683 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 685683 ']' 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 685683 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 685683 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 685683' 00:24:06.852 killing process with pid 685683 00:24:06.852 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 685683 00:24:06.852 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.852 00:24:06.852 Latency(us) 00:24:06.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.852 =================================================================================================================== 00:24:06.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.853 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 685683 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 685508 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 685508 ']' 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 685508 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 685508 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 685508' 00:24:07.112 killing process with pid 685508 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 685508 00:24:07.112 [2024-07-15 06:52:54.629837] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:07.112 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 685508 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.370 06:52:54 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=685958 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 685958 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 685958 ']' 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.370 06:52:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.370 [2024-07-15 06:52:54.931959] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:07.370 [2024-07-15 06:52:54.932056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.370 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.628 [2024-07-15 06:52:55.001065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.628 [2024-07-15 06:52:55.089351] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.628 [2024-07-15 06:52:55.089417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.628 [2024-07-15 06:52:55.089441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.628 [2024-07-15 06:52:55.089452] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.628 [2024-07-15 06:52:55.089461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
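For this final pass the target side also moves off the deprecated PSK path flagged in the earlier killprocess warnings: the tgtcfg saved below carries a keyring_file_add_key entry for key0 and an nvmf_subsystem_add_host with "psk": "key0". The rpc_cmd invocations themselves are not echoed in this capture, but a plausible sketch inferred from that saved config, written in the rpc.py form for clarity, would be:

    rpc.py keyring_file_add_key key0 /tmp/tmp.ICBIamNi51
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # key name from the keyring, not a file path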
00:24:07.628 [2024-07-15 06:52:55.089494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.628 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.628 [2024-07-15 06:52:55.231728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.885 malloc0 00:24:07.885 [2024-07-15 06:52:55.264029] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.885 [2024-07-15 06:52:55.264303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.885 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.885 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=686103 00:24:07.885 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 686103 /var/tmp/bdevperf.sock 00:24:07.885 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 686103 ']' 00:24:07.885 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.886 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.886 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:07.886 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.886 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.886 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.886 [2024-07-15 06:52:55.337471] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:07.886 [2024-07-15 06:52:55.337557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686103 ] 00:24:07.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.886 [2024-07-15 06:52:55.397524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.886 [2024-07-15 06:52:55.484136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.143 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.143 06:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:08.143 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ICBIamNi51 00:24:08.401 06:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:08.659 [2024-07-15 06:52:56.060645] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.659 nvme0n1 00:24:08.659 06:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:08.659 Running I/O for 1 seconds... 00:24:10.037 00:24:10.037 Latency(us) 00:24:10.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.037 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:10.037 Verification LBA range: start 0x0 length 0x2000 00:24:10.037 nvme0n1 : 1.03 3386.58 13.23 0.00 0.00 37238.39 6456.51 50486.99 00:24:10.037 =================================================================================================================== 00:24:10.037 Total : 3386.58 13.23 0.00 0.00 37238.39 6456.51 50486.99 00:24:10.037 0 00:24:10.037 06:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:10.037 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.037 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.037 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.037 06:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:10.037 "subsystems": [ 00:24:10.037 { 00:24:10.037 "subsystem": "keyring", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "keyring_file_add_key", 00:24:10.037 "params": { 00:24:10.037 "name": "key0", 00:24:10.037 "path": "/tmp/tmp.ICBIamNi51" 00:24:10.037 } 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "iobuf", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "iobuf_set_options", 00:24:10.037 "params": { 00:24:10.037 "small_pool_count": 8192, 00:24:10.037 "large_pool_count": 1024, 00:24:10.037 "small_bufsize": 8192, 00:24:10.037 "large_bufsize": 135168 00:24:10.037 } 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "sock", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "sock_set_default_impl", 00:24:10.037 "params": { 00:24:10.037 "impl_name": "posix" 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 
{ 00:24:10.037 "method": "sock_impl_set_options", 00:24:10.037 "params": { 00:24:10.037 "impl_name": "ssl", 00:24:10.037 "recv_buf_size": 4096, 00:24:10.037 "send_buf_size": 4096, 00:24:10.037 "enable_recv_pipe": true, 00:24:10.037 "enable_quickack": false, 00:24:10.037 "enable_placement_id": 0, 00:24:10.037 "enable_zerocopy_send_server": true, 00:24:10.037 "enable_zerocopy_send_client": false, 00:24:10.037 "zerocopy_threshold": 0, 00:24:10.037 "tls_version": 0, 00:24:10.037 "enable_ktls": false 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "sock_impl_set_options", 00:24:10.037 "params": { 00:24:10.037 "impl_name": "posix", 00:24:10.037 "recv_buf_size": 2097152, 00:24:10.037 "send_buf_size": 2097152, 00:24:10.037 "enable_recv_pipe": true, 00:24:10.037 "enable_quickack": false, 00:24:10.037 "enable_placement_id": 0, 00:24:10.037 "enable_zerocopy_send_server": true, 00:24:10.037 "enable_zerocopy_send_client": false, 00:24:10.037 "zerocopy_threshold": 0, 00:24:10.037 "tls_version": 0, 00:24:10.037 "enable_ktls": false 00:24:10.037 } 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "vmd", 00:24:10.037 "config": [] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "accel", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "accel_set_options", 00:24:10.037 "params": { 00:24:10.037 "small_cache_size": 128, 00:24:10.037 "large_cache_size": 16, 00:24:10.037 "task_count": 2048, 00:24:10.037 "sequence_count": 2048, 00:24:10.037 "buf_count": 2048 00:24:10.037 } 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "bdev", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "bdev_set_options", 00:24:10.037 "params": { 00:24:10.037 "bdev_io_pool_size": 65535, 00:24:10.037 "bdev_io_cache_size": 256, 00:24:10.037 "bdev_auto_examine": true, 00:24:10.037 "iobuf_small_cache_size": 128, 00:24:10.037 "iobuf_large_cache_size": 16 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_raid_set_options", 00:24:10.037 "params": { 00:24:10.037 "process_window_size_kb": 1024 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_iscsi_set_options", 00:24:10.037 "params": { 00:24:10.037 "timeout_sec": 30 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_nvme_set_options", 00:24:10.037 "params": { 00:24:10.037 "action_on_timeout": "none", 00:24:10.037 "timeout_us": 0, 00:24:10.037 "timeout_admin_us": 0, 00:24:10.037 "keep_alive_timeout_ms": 10000, 00:24:10.037 "arbitration_burst": 0, 00:24:10.037 "low_priority_weight": 0, 00:24:10.037 "medium_priority_weight": 0, 00:24:10.037 "high_priority_weight": 0, 00:24:10.037 "nvme_adminq_poll_period_us": 10000, 00:24:10.037 "nvme_ioq_poll_period_us": 0, 00:24:10.037 "io_queue_requests": 0, 00:24:10.037 "delay_cmd_submit": true, 00:24:10.037 "transport_retry_count": 4, 00:24:10.037 "bdev_retry_count": 3, 00:24:10.037 "transport_ack_timeout": 0, 00:24:10.037 "ctrlr_loss_timeout_sec": 0, 00:24:10.037 "reconnect_delay_sec": 0, 00:24:10.037 "fast_io_fail_timeout_sec": 0, 00:24:10.037 "disable_auto_failback": false, 00:24:10.037 "generate_uuids": false, 00:24:10.037 "transport_tos": 0, 00:24:10.037 "nvme_error_stat": false, 00:24:10.037 "rdma_srq_size": 0, 00:24:10.037 "io_path_stat": false, 00:24:10.037 "allow_accel_sequence": false, 00:24:10.037 "rdma_max_cq_size": 0, 00:24:10.037 "rdma_cm_event_timeout_ms": 0, 00:24:10.037 "dhchap_digests": [ 00:24:10.037 "sha256", 00:24:10.037 "sha384", 
00:24:10.037 "sha512" 00:24:10.037 ], 00:24:10.037 "dhchap_dhgroups": [ 00:24:10.037 "null", 00:24:10.037 "ffdhe2048", 00:24:10.037 "ffdhe3072", 00:24:10.037 "ffdhe4096", 00:24:10.037 "ffdhe6144", 00:24:10.037 "ffdhe8192" 00:24:10.037 ] 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_nvme_set_hotplug", 00:24:10.037 "params": { 00:24:10.037 "period_us": 100000, 00:24:10.037 "enable": false 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_malloc_create", 00:24:10.037 "params": { 00:24:10.037 "name": "malloc0", 00:24:10.037 "num_blocks": 8192, 00:24:10.037 "block_size": 4096, 00:24:10.037 "physical_block_size": 4096, 00:24:10.037 "uuid": "92a6e958-9bc7-4d1a-b41b-22d1904e43a9", 00:24:10.037 "optimal_io_boundary": 0 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "bdev_wait_for_examine" 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "nbd", 00:24:10.037 "config": [] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "scheduler", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "framework_set_scheduler", 00:24:10.037 "params": { 00:24:10.037 "name": "static" 00:24:10.037 } 00:24:10.037 } 00:24:10.037 ] 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "subsystem": "nvmf", 00:24:10.037 "config": [ 00:24:10.037 { 00:24:10.037 "method": "nvmf_set_config", 00:24:10.037 "params": { 00:24:10.037 "discovery_filter": "match_any", 00:24:10.037 "admin_cmd_passthru": { 00:24:10.037 "identify_ctrlr": false 00:24:10.037 } 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "nvmf_set_max_subsystems", 00:24:10.037 "params": { 00:24:10.037 "max_subsystems": 1024 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "nvmf_set_crdt", 00:24:10.037 "params": { 00:24:10.037 "crdt1": 0, 00:24:10.037 "crdt2": 0, 00:24:10.037 "crdt3": 0 00:24:10.037 } 00:24:10.037 }, 00:24:10.037 { 00:24:10.037 "method": "nvmf_create_transport", 00:24:10.037 "params": { 00:24:10.037 "trtype": "TCP", 00:24:10.037 "max_queue_depth": 128, 00:24:10.037 "max_io_qpairs_per_ctrlr": 127, 00:24:10.037 "in_capsule_data_size": 4096, 00:24:10.037 "max_io_size": 131072, 00:24:10.037 "io_unit_size": 131072, 00:24:10.037 "max_aq_depth": 128, 00:24:10.037 "num_shared_buffers": 511, 00:24:10.037 "buf_cache_size": 4294967295, 00:24:10.037 "dif_insert_or_strip": false, 00:24:10.037 "zcopy": false, 00:24:10.037 "c2h_success": false, 00:24:10.037 "sock_priority": 0, 00:24:10.037 "abort_timeout_sec": 1, 00:24:10.037 "ack_timeout": 0, 00:24:10.038 "data_wr_pool_size": 0 00:24:10.038 } 00:24:10.038 }, 00:24:10.038 { 00:24:10.038 "method": "nvmf_create_subsystem", 00:24:10.038 "params": { 00:24:10.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.038 "allow_any_host": false, 00:24:10.038 "serial_number": "00000000000000000000", 00:24:10.038 "model_number": "SPDK bdev Controller", 00:24:10.038 "max_namespaces": 32, 00:24:10.038 "min_cntlid": 1, 00:24:10.038 "max_cntlid": 65519, 00:24:10.038 "ana_reporting": false 00:24:10.038 } 00:24:10.038 }, 00:24:10.038 { 00:24:10.038 "method": "nvmf_subsystem_add_host", 00:24:10.038 "params": { 00:24:10.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.038 "host": "nqn.2016-06.io.spdk:host1", 00:24:10.038 "psk": "key0" 00:24:10.038 } 00:24:10.038 }, 00:24:10.038 { 00:24:10.038 "method": "nvmf_subsystem_add_ns", 00:24:10.038 "params": { 00:24:10.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.038 "namespace": { 00:24:10.038 "nsid": 1, 00:24:10.038 "bdev_name": 
"malloc0", 00:24:10.038 "nguid": "92A6E9589BC74D1AB41B22D1904E43A9", 00:24:10.038 "uuid": "92a6e958-9bc7-4d1a-b41b-22d1904e43a9", 00:24:10.038 "no_auto_visible": false 00:24:10.038 } 00:24:10.038 } 00:24:10.038 }, 00:24:10.038 { 00:24:10.038 "method": "nvmf_subsystem_add_listener", 00:24:10.038 "params": { 00:24:10.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.038 "listen_address": { 00:24:10.038 "trtype": "TCP", 00:24:10.038 "adrfam": "IPv4", 00:24:10.038 "traddr": "10.0.0.2", 00:24:10.038 "trsvcid": "4420" 00:24:10.038 }, 00:24:10.038 "secure_channel": true 00:24:10.038 } 00:24:10.038 } 00:24:10.038 ] 00:24:10.038 } 00:24:10.038 ] 00:24:10.038 }' 00:24:10.038 06:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:10.297 06:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:10.297 "subsystems": [ 00:24:10.297 { 00:24:10.297 "subsystem": "keyring", 00:24:10.297 "config": [ 00:24:10.297 { 00:24:10.297 "method": "keyring_file_add_key", 00:24:10.297 "params": { 00:24:10.297 "name": "key0", 00:24:10.297 "path": "/tmp/tmp.ICBIamNi51" 00:24:10.297 } 00:24:10.297 } 00:24:10.297 ] 00:24:10.297 }, 00:24:10.297 { 00:24:10.297 "subsystem": "iobuf", 00:24:10.297 "config": [ 00:24:10.297 { 00:24:10.297 "method": "iobuf_set_options", 00:24:10.298 "params": { 00:24:10.298 "small_pool_count": 8192, 00:24:10.298 "large_pool_count": 1024, 00:24:10.298 "small_bufsize": 8192, 00:24:10.298 "large_bufsize": 135168 00:24:10.298 } 00:24:10.298 } 00:24:10.298 ] 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "subsystem": "sock", 00:24:10.298 "config": [ 00:24:10.298 { 00:24:10.298 "method": "sock_set_default_impl", 00:24:10.298 "params": { 00:24:10.298 "impl_name": "posix" 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "sock_impl_set_options", 00:24:10.298 "params": { 00:24:10.298 "impl_name": "ssl", 00:24:10.298 "recv_buf_size": 4096, 00:24:10.298 "send_buf_size": 4096, 00:24:10.298 "enable_recv_pipe": true, 00:24:10.298 "enable_quickack": false, 00:24:10.298 "enable_placement_id": 0, 00:24:10.298 "enable_zerocopy_send_server": true, 00:24:10.298 "enable_zerocopy_send_client": false, 00:24:10.298 "zerocopy_threshold": 0, 00:24:10.298 "tls_version": 0, 00:24:10.298 "enable_ktls": false 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "sock_impl_set_options", 00:24:10.298 "params": { 00:24:10.298 "impl_name": "posix", 00:24:10.298 "recv_buf_size": 2097152, 00:24:10.298 "send_buf_size": 2097152, 00:24:10.298 "enable_recv_pipe": true, 00:24:10.298 "enable_quickack": false, 00:24:10.298 "enable_placement_id": 0, 00:24:10.298 "enable_zerocopy_send_server": true, 00:24:10.298 "enable_zerocopy_send_client": false, 00:24:10.298 "zerocopy_threshold": 0, 00:24:10.298 "tls_version": 0, 00:24:10.298 "enable_ktls": false 00:24:10.298 } 00:24:10.298 } 00:24:10.298 ] 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "subsystem": "vmd", 00:24:10.298 "config": [] 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "subsystem": "accel", 00:24:10.298 "config": [ 00:24:10.298 { 00:24:10.298 "method": "accel_set_options", 00:24:10.298 "params": { 00:24:10.298 "small_cache_size": 128, 00:24:10.298 "large_cache_size": 16, 00:24:10.298 "task_count": 2048, 00:24:10.298 "sequence_count": 2048, 00:24:10.298 "buf_count": 2048 00:24:10.298 } 00:24:10.298 } 00:24:10.298 ] 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "subsystem": "bdev", 00:24:10.298 "config": [ 00:24:10.298 { 00:24:10.298 
"method": "bdev_set_options", 00:24:10.298 "params": { 00:24:10.298 "bdev_io_pool_size": 65535, 00:24:10.298 "bdev_io_cache_size": 256, 00:24:10.298 "bdev_auto_examine": true, 00:24:10.298 "iobuf_small_cache_size": 128, 00:24:10.298 "iobuf_large_cache_size": 16 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_raid_set_options", 00:24:10.298 "params": { 00:24:10.298 "process_window_size_kb": 1024 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_iscsi_set_options", 00:24:10.298 "params": { 00:24:10.298 "timeout_sec": 30 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_nvme_set_options", 00:24:10.298 "params": { 00:24:10.298 "action_on_timeout": "none", 00:24:10.298 "timeout_us": 0, 00:24:10.298 "timeout_admin_us": 0, 00:24:10.298 "keep_alive_timeout_ms": 10000, 00:24:10.298 "arbitration_burst": 0, 00:24:10.298 "low_priority_weight": 0, 00:24:10.298 "medium_priority_weight": 0, 00:24:10.298 "high_priority_weight": 0, 00:24:10.298 "nvme_adminq_poll_period_us": 10000, 00:24:10.298 "nvme_ioq_poll_period_us": 0, 00:24:10.298 "io_queue_requests": 512, 00:24:10.298 "delay_cmd_submit": true, 00:24:10.298 "transport_retry_count": 4, 00:24:10.298 "bdev_retry_count": 3, 00:24:10.298 "transport_ack_timeout": 0, 00:24:10.298 "ctrlr_loss_timeout_sec": 0, 00:24:10.298 "reconnect_delay_sec": 0, 00:24:10.298 "fast_io_fail_timeout_sec": 0, 00:24:10.298 "disable_auto_failback": false, 00:24:10.298 "generate_uuids": false, 00:24:10.298 "transport_tos": 0, 00:24:10.298 "nvme_error_stat": false, 00:24:10.298 "rdma_srq_size": 0, 00:24:10.298 "io_path_stat": false, 00:24:10.298 "allow_accel_sequence": false, 00:24:10.298 "rdma_max_cq_size": 0, 00:24:10.298 "rdma_cm_event_timeout_ms": 0, 00:24:10.298 "dhchap_digests": [ 00:24:10.298 "sha256", 00:24:10.298 "sha384", 00:24:10.298 "sha512" 00:24:10.298 ], 00:24:10.298 "dhchap_dhgroups": [ 00:24:10.298 "null", 00:24:10.298 "ffdhe2048", 00:24:10.298 "ffdhe3072", 00:24:10.298 "ffdhe4096", 00:24:10.298 "ffdhe6144", 00:24:10.298 "ffdhe8192" 00:24:10.298 ] 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_nvme_attach_controller", 00:24:10.298 "params": { 00:24:10.298 "name": "nvme0", 00:24:10.298 "trtype": "TCP", 00:24:10.298 "adrfam": "IPv4", 00:24:10.298 "traddr": "10.0.0.2", 00:24:10.298 "trsvcid": "4420", 00:24:10.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.298 "prchk_reftag": false, 00:24:10.298 "prchk_guard": false, 00:24:10.298 "ctrlr_loss_timeout_sec": 0, 00:24:10.298 "reconnect_delay_sec": 0, 00:24:10.298 "fast_io_fail_timeout_sec": 0, 00:24:10.298 "psk": "key0", 00:24:10.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.298 "hdgst": false, 00:24:10.298 "ddgst": false 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_nvme_set_hotplug", 00:24:10.298 "params": { 00:24:10.298 "period_us": 100000, 00:24:10.298 "enable": false 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_enable_histogram", 00:24:10.298 "params": { 00:24:10.298 "name": "nvme0n1", 00:24:10.298 "enable": true 00:24:10.298 } 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "method": "bdev_wait_for_examine" 00:24:10.298 } 00:24:10.298 ] 00:24:10.298 }, 00:24:10.298 { 00:24:10.298 "subsystem": "nbd", 00:24:10.298 "config": [] 00:24:10.298 } 00:24:10.298 ] 00:24:10.298 }' 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 686103 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 686103 ']' 
00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 686103 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 686103 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 686103' 00:24:10.298 killing process with pid 686103 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 686103 00:24:10.298 Received shutdown signal, test time was about 1.000000 seconds 00:24:10.298 00:24:10.298 Latency(us) 00:24:10.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.298 =================================================================================================================== 00:24:10.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.298 06:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 686103 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 685958 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 685958 ']' 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 685958 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 685958 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 685958' 00:24:10.558 killing process with pid 685958 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 685958 00:24:10.558 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 685958 00:24:10.817 06:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:10.817 06:52:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.817 06:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:10.817 "subsystems": [ 00:24:10.817 { 00:24:10.817 "subsystem": "keyring", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "keyring_file_add_key", 00:24:10.817 "params": { 00:24:10.817 "name": "key0", 00:24:10.817 "path": "/tmp/tmp.ICBIamNi51" 00:24:10.817 } 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "iobuf", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "iobuf_set_options", 00:24:10.817 "params": { 00:24:10.817 "small_pool_count": 8192, 00:24:10.817 "large_pool_count": 1024, 00:24:10.817 "small_bufsize": 8192, 00:24:10.817 "large_bufsize": 135168 00:24:10.817 } 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "sock", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "sock_set_default_impl", 00:24:10.817 "params": { 
00:24:10.817 "impl_name": "posix" 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "sock_impl_set_options", 00:24:10.817 "params": { 00:24:10.817 "impl_name": "ssl", 00:24:10.817 "recv_buf_size": 4096, 00:24:10.817 "send_buf_size": 4096, 00:24:10.817 "enable_recv_pipe": true, 00:24:10.817 "enable_quickack": false, 00:24:10.817 "enable_placement_id": 0, 00:24:10.817 "enable_zerocopy_send_server": true, 00:24:10.817 "enable_zerocopy_send_client": false, 00:24:10.817 "zerocopy_threshold": 0, 00:24:10.817 "tls_version": 0, 00:24:10.817 "enable_ktls": false 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "sock_impl_set_options", 00:24:10.817 "params": { 00:24:10.817 "impl_name": "posix", 00:24:10.817 "recv_buf_size": 2097152, 00:24:10.817 "send_buf_size": 2097152, 00:24:10.817 "enable_recv_pipe": true, 00:24:10.817 "enable_quickack": false, 00:24:10.817 "enable_placement_id": 0, 00:24:10.817 "enable_zerocopy_send_server": true, 00:24:10.817 "enable_zerocopy_send_client": false, 00:24:10.817 "zerocopy_threshold": 0, 00:24:10.817 "tls_version": 0, 00:24:10.817 "enable_ktls": false 00:24:10.817 } 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "vmd", 00:24:10.817 "config": [] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "accel", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "accel_set_options", 00:24:10.817 "params": { 00:24:10.817 "small_cache_size": 128, 00:24:10.817 "large_cache_size": 16, 00:24:10.817 "task_count": 2048, 00:24:10.817 "sequence_count": 2048, 00:24:10.817 "buf_count": 2048 00:24:10.817 } 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "bdev", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "bdev_set_options", 00:24:10.817 "params": { 00:24:10.817 "bdev_io_pool_size": 65535, 00:24:10.817 "bdev_io_cache_size": 256, 00:24:10.817 "bdev_auto_examine": true, 00:24:10.817 "iobuf_small_cache_size": 128, 00:24:10.817 "iobuf_large_cache_size": 16 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_raid_set_options", 00:24:10.817 "params": { 00:24:10.817 "process_window_size_kb": 1024 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_iscsi_set_options", 00:24:10.817 "params": { 00:24:10.817 "timeout_sec": 30 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_nvme_set_options", 00:24:10.817 "params": { 00:24:10.817 "action_on_timeout": "none", 00:24:10.817 "timeout_us": 0, 00:24:10.817 "timeout_admin_us": 0, 00:24:10.817 "keep_alive_timeout_ms": 10000, 00:24:10.817 "arbitration_burst": 0, 00:24:10.817 "low_priority_weight": 0, 00:24:10.817 "medium_priority_weight": 0, 00:24:10.817 "high_priority_weight": 0, 00:24:10.817 "nvme_adminq_poll_period_us": 10000, 00:24:10.817 "nvme_ioq_poll_period_us": 0, 00:24:10.817 "io_queue_requests": 0, 00:24:10.817 "delay_cmd_submit": true, 00:24:10.817 "transport_retry_count": 4, 00:24:10.817 "bdev_retry_count": 3, 00:24:10.817 "transport_ack_timeout": 0, 00:24:10.817 "ctrlr_loss_timeout_sec": 0, 00:24:10.817 "reconnect_delay_sec": 0, 00:24:10.817 "fast_io_fail_timeout_sec": 0, 00:24:10.817 "disable_auto_failback": false, 00:24:10.817 "generate_uuids": false, 00:24:10.817 "transport_tos": 0, 00:24:10.817 "nvme_error_stat": false, 00:24:10.817 "rdma_srq_size": 0, 00:24:10.817 "io_path_stat": false, 00:24:10.817 "allow_accel_sequence": false, 00:24:10.817 "rdma_max_cq_size": 0, 00:24:10.817 "rdma_cm_event_timeout_ms": 0, 
00:24:10.817 "dhchap_digests": [ 00:24:10.817 "sha256", 00:24:10.817 "sha384", 00:24:10.817 "sha512" 00:24:10.817 ], 00:24:10.817 "dhchap_dhgroups": [ 00:24:10.817 "null", 00:24:10.817 "ffdhe2048", 00:24:10.817 "ffdhe3072", 00:24:10.817 "ffdhe4096", 00:24:10.817 "ffdhe6144", 00:24:10.817 "ffdhe8192" 00:24:10.817 ] 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_nvme_set_hotplug", 00:24:10.817 "params": { 00:24:10.817 "period_us": 100000, 00:24:10.817 "enable": false 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_malloc_create", 00:24:10.817 "params": { 00:24:10.817 "name": "malloc0", 00:24:10.817 "num_blocks": 8192, 00:24:10.817 "block_size": 4096, 00:24:10.817 "physical_block_size": 4096, 00:24:10.817 "uuid": "92a6e958-9bc7-4d1a-b41b-22d1904e43a9", 00:24:10.817 "optimal_io_boundary": 0 00:24:10.817 } 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "method": "bdev_wait_for_examine" 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "nbd", 00:24:10.817 "config": [] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "scheduler", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "framework_set_scheduler", 00:24:10.817 "params": { 00:24:10.817 "name": "static" 00:24:10.817 } 00:24:10.817 } 00:24:10.817 ] 00:24:10.817 }, 00:24:10.817 { 00:24:10.817 "subsystem": "nvmf", 00:24:10.817 "config": [ 00:24:10.817 { 00:24:10.817 "method": "nvmf_set_config", 00:24:10.817 "params": { 00:24:10.817 "discovery_filter": "match_any", 00:24:10.817 "admin_cmd_passthru": { 00:24:10.817 "identify_ctrlr": false 00:24:10.818 } 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_set_max_subsystems", 00:24:10.818 "params": { 00:24:10.818 "max_subsystems": 1024 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_set_crdt", 00:24:10.818 "params": { 00:24:10.818 "crdt1": 0, 00:24:10.818 "crdt2": 0, 00:24:10.818 "crdt3": 0 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_create_transport", 00:24:10.818 "params": { 00:24:10.818 "trtype": "TCP", 00:24:10.818 "max_queue_depth": 128, 00:24:10.818 "max_io_qpairs_per_ctrlr": 127, 00:24:10.818 "in_capsule_data_size": 4096, 00:24:10.818 "max_io_size": 131072, 00:24:10.818 "io_unit_size": 131072, 00:24:10.818 "max_aq_depth": 128, 00:24:10.818 "num_shared_buffers": 511, 00:24:10.818 "buf_cache_size": 4294967295, 00:24:10.818 "dif_insert_or_strip": false, 00:24:10.818 "zcopy": false, 00:24:10.818 "c2h_success": false, 00:24:10.818 "sock_priority": 0, 00:24:10.818 "abort_timeout_sec": 1, 00:24:10.818 "ack_timeout": 0, 00:24:10.818 "data_wr_pool_size": 0 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_create_subsystem", 00:24:10.818 "params": { 00:24:10.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.818 "allow_any_host": false, 00:24:10.818 "serial_number": "00000000000000000000", 00:24:10.818 "model_number": "SPDK bdev Controller", 00:24:10.818 "max_namespaces": 32, 00:24:10.818 "min_cntlid": 1, 00:24:10.818 "max_cntlid": 65519, 00:24:10.818 "ana_reporting": false 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_subsystem_add_host", 00:24:10.818 "params": { 00:24:10.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.818 "host": "nqn.2016-06.io.spdk:host1", 00:24:10.818 "psk": "key0" 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_subsystem_add_ns", 00:24:10.818 "params": { 00:24:10.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:10.818 "namespace": { 00:24:10.818 "nsid": 1, 00:24:10.818 "bdev_name": "malloc0", 00:24:10.818 "nguid": "92A6E9589BC74D1AB41B22D1904E43A9", 00:24:10.818 "uuid": "92a6e958-9bc7-4d1a-b41b-22d1904e43a9", 00:24:10.818 "no_auto_visible": false 00:24:10.818 } 00:24:10.818 } 00:24:10.818 }, 00:24:10.818 { 00:24:10.818 "method": "nvmf_subsystem_add_listener", 00:24:10.818 "params": { 00:24:10.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.818 "listen_address": { 00:24:10.818 "trtype": "TCP", 00:24:10.818 "adrfam": "IPv4", 00:24:10.818 "traddr": "10.0.0.2", 00:24:10.818 "trsvcid": "4420" 00:24:10.818 }, 00:24:10.818 "secure_channel": true 00:24:10.818 } 00:24:10.818 } 00:24:10.818 ] 00:24:10.818 } 00:24:10.818 ] 00:24:10.818 }' 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=686396 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 686396 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 686396 ']' 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.818 06:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.818 [2024-07-15 06:52:58.362291] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:10.818 [2024-07-15 06:52:58.362374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.818 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.076 [2024-07-15 06:52:58.432777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.076 [2024-07-15 06:52:58.523485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.076 [2024-07-15 06:52:58.523548] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.076 [2024-07-15 06:52:58.523574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.076 [2024-07-15 06:52:58.523587] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.076 [2024-07-15 06:52:58.523599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.076 [2024-07-15 06:52:58.523693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.334 [2024-07-15 06:52:58.763972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.334 [2024-07-15 06:52:58.795983] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.334 [2024-07-15 06:52:58.814074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=686549 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 686549 /var/tmp/bdevperf.sock 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 686549 ']' 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:11.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
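The initiator side follows the same shape: bdevperf starts idle (-z) with its own RPC socket and a configuration fed on a descriptor, and the actual I/O run is triggered over RPC afterwards. A sketch with the flags from this run, /tmp/bperf.json standing in for the harness's /dev/fd/63 pipe:

    # 128-deep, 4 KiB, 1-second verify workload on core mask 0x2, started idle
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c /tmp/bperf.json &
    # once the socket is up, kick off the run
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests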
00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.900 06:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:11.900 "subsystems": [ 00:24:11.900 { 00:24:11.900 "subsystem": "keyring", 00:24:11.900 "config": [ 00:24:11.900 { 00:24:11.900 "method": "keyring_file_add_key", 00:24:11.900 "params": { 00:24:11.900 "name": "key0", 00:24:11.900 "path": "/tmp/tmp.ICBIamNi51" 00:24:11.900 } 00:24:11.900 } 00:24:11.900 ] 00:24:11.900 }, 00:24:11.900 { 00:24:11.900 "subsystem": "iobuf", 00:24:11.900 "config": [ 00:24:11.900 { 00:24:11.900 "method": "iobuf_set_options", 00:24:11.900 "params": { 00:24:11.900 "small_pool_count": 8192, 00:24:11.900 "large_pool_count": 1024, 00:24:11.900 "small_bufsize": 8192, 00:24:11.900 "large_bufsize": 135168 00:24:11.900 } 00:24:11.900 } 00:24:11.900 ] 00:24:11.900 }, 00:24:11.900 { 00:24:11.900 "subsystem": "sock", 00:24:11.900 "config": [ 00:24:11.900 { 00:24:11.900 "method": "sock_set_default_impl", 00:24:11.900 "params": { 00:24:11.900 "impl_name": "posix" 00:24:11.900 } 00:24:11.900 }, 00:24:11.900 { 00:24:11.900 "method": "sock_impl_set_options", 00:24:11.900 "params": { 00:24:11.900 "impl_name": "ssl", 00:24:11.900 "recv_buf_size": 4096, 00:24:11.900 "send_buf_size": 4096, 00:24:11.900 "enable_recv_pipe": true, 00:24:11.900 "enable_quickack": false, 00:24:11.900 "enable_placement_id": 0, 00:24:11.900 "enable_zerocopy_send_server": true, 00:24:11.900 "enable_zerocopy_send_client": false, 00:24:11.900 "zerocopy_threshold": 0, 00:24:11.900 "tls_version": 0, 00:24:11.900 "enable_ktls": false 00:24:11.900 } 00:24:11.900 }, 00:24:11.900 { 00:24:11.900 "method": "sock_impl_set_options", 00:24:11.900 "params": { 00:24:11.900 "impl_name": "posix", 00:24:11.900 "recv_buf_size": 2097152, 00:24:11.900 "send_buf_size": 2097152, 00:24:11.900 "enable_recv_pipe": true, 00:24:11.900 "enable_quickack": false, 00:24:11.900 "enable_placement_id": 0, 00:24:11.900 "enable_zerocopy_send_server": true, 00:24:11.900 "enable_zerocopy_send_client": false, 00:24:11.900 "zerocopy_threshold": 0, 00:24:11.900 "tls_version": 0, 00:24:11.900 "enable_ktls": false 00:24:11.900 } 00:24:11.900 } 00:24:11.900 ] 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "subsystem": "vmd", 00:24:11.901 "config": [] 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "subsystem": "accel", 00:24:11.901 "config": [ 00:24:11.901 { 00:24:11.901 "method": "accel_set_options", 00:24:11.901 "params": { 00:24:11.901 "small_cache_size": 128, 00:24:11.901 "large_cache_size": 16, 00:24:11.901 "task_count": 2048, 00:24:11.901 "sequence_count": 2048, 00:24:11.901 "buf_count": 2048 00:24:11.901 } 00:24:11.901 } 00:24:11.901 ] 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "subsystem": "bdev", 00:24:11.901 "config": [ 00:24:11.901 { 00:24:11.901 "method": "bdev_set_options", 00:24:11.901 "params": { 00:24:11.901 "bdev_io_pool_size": 65535, 00:24:11.901 "bdev_io_cache_size": 256, 00:24:11.901 "bdev_auto_examine": true, 00:24:11.901 "iobuf_small_cache_size": 128, 00:24:11.901 "iobuf_large_cache_size": 16 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_raid_set_options", 00:24:11.901 "params": { 00:24:11.901 "process_window_size_kb": 1024 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_iscsi_set_options", 00:24:11.901 "params": { 00:24:11.901 "timeout_sec": 30 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": 
"bdev_nvme_set_options", 00:24:11.901 "params": { 00:24:11.901 "action_on_timeout": "none", 00:24:11.901 "timeout_us": 0, 00:24:11.901 "timeout_admin_us": 0, 00:24:11.901 "keep_alive_timeout_ms": 10000, 00:24:11.901 "arbitration_burst": 0, 00:24:11.901 "low_priority_weight": 0, 00:24:11.901 "medium_priority_weight": 0, 00:24:11.901 "high_priority_weight": 0, 00:24:11.901 "nvme_adminq_poll_period_us": 10000, 00:24:11.901 "nvme_ioq_poll_period_us": 0, 00:24:11.901 "io_queue_requests": 512, 00:24:11.901 "delay_cmd_submit": true, 00:24:11.901 "transport_retry_count": 4, 00:24:11.901 "bdev_retry_count": 3, 00:24:11.901 "transport_ack_timeout": 0, 00:24:11.901 "ctrlr_loss_timeout_sec": 0, 00:24:11.901 "reconnect_delay_sec": 0, 00:24:11.901 "fast_io_fail_timeout_sec": 0, 00:24:11.901 "disable_auto_failback": false, 00:24:11.901 "generate_uuids": false, 00:24:11.901 "transport_tos": 0, 00:24:11.901 "nvme_error_stat": false, 00:24:11.901 "rdma_srq_size": 0, 00:24:11.901 "io_path_stat": false, 00:24:11.901 "allow_accel_sequence": false, 00:24:11.901 "rdma_max_cq_size": 0, 00:24:11.901 "rdma_cm_event_timeout_ms": 0, 00:24:11.901 "dhchap_digests": [ 00:24:11.901 "sha256", 00:24:11.901 "sha384", 00:24:11.901 "sha512" 00:24:11.901 ], 00:24:11.901 "dhchap_dhgroups": [ 00:24:11.901 "null", 00:24:11.901 "ffdhe2048", 00:24:11.901 "ffdhe3072", 00:24:11.901 "ffdhe4096", 00:24:11.901 "ffdhe6144", 00:24:11.901 "ffdhe8192" 00:24:11.901 ] 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_nvme_attach_controller", 00:24:11.901 "params": { 00:24:11.901 "name": "nvme0", 00:24:11.901 "trtype": "TCP", 00:24:11.901 "adrfam": "IPv4", 00:24:11.901 "traddr": "10.0.0.2", 00:24:11.901 "trsvcid": "4420", 00:24:11.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.901 "prchk_reftag": false, 00:24:11.901 "prchk_guard": false, 00:24:11.901 "ctrlr_loss_timeout_sec": 0, 00:24:11.901 "reconnect_delay_sec": 0, 00:24:11.901 "fast_io_fail_timeout_sec": 0, 00:24:11.901 "psk": "key0", 00:24:11.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.901 "hdgst": false, 00:24:11.901 "ddgst": false 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_nvme_set_hotplug", 00:24:11.901 "params": { 00:24:11.901 "period_us": 100000, 00:24:11.901 "enable": false 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_enable_histogram", 00:24:11.901 "params": { 00:24:11.901 "name": "nvme0n1", 00:24:11.901 "enable": true 00:24:11.901 } 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "method": "bdev_wait_for_examine" 00:24:11.901 } 00:24:11.901 ] 00:24:11.901 }, 00:24:11.901 { 00:24:11.901 "subsystem": "nbd", 00:24:11.901 "config": [] 00:24:11.901 } 00:24:11.901 ] 00:24:11.901 }' 00:24:11.901 [2024-07-15 06:52:59.360685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:11.901 [2024-07-15 06:52:59.360772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686549 ] 00:24:11.901 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.901 [2024-07-15 06:52:59.426103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.161 [2024-07-15 06:52:59.519673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.161 [2024-07-15 06:52:59.701315] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.730 06:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:12.730 06:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:12.988 06:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:12.988 06:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.247 06:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.247 06:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.247 Running I/O for 1 seconds... 00:24:14.181 00:24:14.181 Latency(us) 00:24:14.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.181 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:14.181 Verification LBA range: start 0x0 length 0x2000 00:24:14.181 nvme0n1 : 1.03 2583.58 10.09 0.00 0.00 48810.59 6699.24 59419.31 00:24:14.182 =================================================================================================================== 00:24:14.182 Total : 2583.58 10.09 0.00 0.00 48810.59 6699.24 59419.31 00:24:14.182 0 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:14.182 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:14.182 nvmf_trace.0 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 686549 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 686549 ']' 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 686549 
00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 686549 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 686549' 00:24:14.440 killing process with pid 686549 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 686549 00:24:14.440 Received shutdown signal, test time was about 1.000000 seconds 00:24:14.440 00:24:14.440 Latency(us) 00:24:14.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.440 =================================================================================================================== 00:24:14.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.440 06:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 686549 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.714 rmmod nvme_tcp 00:24:14.714 rmmod nvme_fabrics 00:24:14.714 rmmod nvme_keyring 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 686396 ']' 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 686396 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 686396 ']' 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 686396 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 686396 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 686396' 00:24:14.714 killing process with pid 686396 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 686396 00:24:14.714 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 686396 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.983 06:53:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.886 06:53:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.886 06:53:04 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2Ic3pEqkJZ /tmp/tmp.utI3ENk7Dz /tmp/tmp.ICBIamNi51 00:24:16.887 00:24:16.887 real 1m18.373s 00:24:16.887 user 2m6.757s 00:24:16.887 sys 0m25.718s 00:24:16.887 06:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:16.887 06:53:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.887 ************************************ 00:24:16.887 END TEST nvmf_tls 00:24:16.887 ************************************ 00:24:16.887 06:53:04 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:16.887 06:53:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:16.887 06:53:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:16.887 06:53:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 ************************************ 00:24:17.145 START TEST nvmf_fips 00:24:17.145 ************************************ 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:17.145 * Looking for test storage... 
00:24:17.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.145 06:53:04 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.146 06:53:04 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:17.146 Error setting digest 00:24:17.146 0072837F1E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:17.146 0072837F1E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.146 06:53:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.681 
06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:19.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:19.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:19.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:19.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.681 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:24:19.682 00:24:19.682 --- 10.0.0.2 ping statistics --- 00:24:19.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.682 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:19.682 00:24:19.682 --- 10.0.0.1 ping statistics --- 00:24:19.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.682 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=688903 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 688903 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 688903 ']' 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:19.682 06:53:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.682 [2024-07-15 06:53:06.955370] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:19.682 [2024-07-15 06:53:06.955450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.682 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.682 [2024-07-15 06:53:07.024220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.682 [2024-07-15 06:53:07.116921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.682 [2024-07-15 06:53:07.116990] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
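The trace at the top of this excerpt is fips.sh proving the host really enforces FIPS before any NVMe/TCP traffic is attempted: it locates fips.so under the OpenSSL modules directory, points OPENSSL_CONF at a generated config that activates the base and fips providers, checks that exactly those two providers are listed, and finally confirms that a non-approved digest is rejected. A condensed sketch of that probe, assuming an OpenSSL 3.x host (the provider-activating config itself is produced by the script's build_openssl_config helper and is omitted here):

# Condensed sketch of the FIPS probe traced above; structure simplified,
# real checks live in test/nvmf/fips/fips.sh.
modulesdir=$(openssl info -modulesdir)
[[ -f "$modulesdir/fips.so" ]] || { echo 'no FIPS module installed' >&2; exit 1; }
export OPENSSL_CONF=spdk_fips.conf            # config activating base + fips providers
mapfile -t providers < <(openssl list -providers | grep name)
(( ${#providers[@]} == 2 )) || exit 1         # expect exactly the two providers
[[ ${providers[0],,} == *base* ]] || exit 1
[[ ${providers[1],,} == *fips* ]] || exit 1
# With only base + fips active, a legacy digest must fail:
if openssl md5 <<< 'not allowed' >/dev/null 2>&1; then
    echo 'MD5 succeeded -- FIPS mode is not enforced' >&2; exit 1
fi

The "Error setting digest" lines in the log are therefore the expected outcome of this last step, not a test failure.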
00:24:19.682 [2024-07-15 06:53:07.117015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.682 [2024-07-15 06:53:07.117029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.682 [2024-07-15 06:53:07.117042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.682 [2024-07-15 06:53:07.117082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.682 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:19.941 [2024-07-15 06:53:07.533303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.941 [2024-07-15 06:53:07.549309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.941 [2024-07-15 06:53:07.549566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.200 [2024-07-15 06:53:07.581818] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:20.200 malloc0 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=689051 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 689051 /var/tmp/bdevperf.sock 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 689051 ']' 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:20.200 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:20.200 [2024-07-15 06:53:07.676155] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:20.200 [2024-07-15 06:53:07.676249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689051 ] 00:24:20.200 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.200 [2024-07-15 06:53:07.736522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.458 [2024-07-15 06:53:07.824073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.458 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:20.458 06:53:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:20.458 06:53:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:20.716 [2024-07-15 06:53:08.199511] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.716 [2024-07-15 06:53:08.199636] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:20.716 TLSTESTn1 00:24:20.716 06:53:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.974 Running I/O for 10 seconds... 
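With the FIPS provider active, the test then proves an NVMe/TCP connection can be established over TLS: the key is written to a 0600 file, bdevperf is started as the initiator on its own RPC socket, and a controller is attached with the PSK before a 10-second verify workload runs. The client half of that sequence, condensed from the trace above (absolute Jenkins paths shortened to rpc.py/bdevperf.py; key value and all flags as logged):

# Client-side replay of the TLS attach traced above. rpc.py and bdevperf.py
# stand in for SPDK's scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py.
key=key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
chmod 0600 "$key"                               # PSK files must not be world-readable
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (the real script waits for the RPC socket via waitforlisten before attaching)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"   # deprecated PSK-path variant
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The target-side half (setup_nvmf_tgt_conf at fips.sh@141) registers the same key file against the subsystem, which is what triggers the deprecated nvmf_tcp_psk_path warning seen earlier in the log.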
00:24:30.960 00:24:30.960 Latency(us) 00:24:30.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.960 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.960 Verification LBA range: start 0x0 length 0x2000 00:24:30.960 TLSTESTn1 : 10.06 2383.50 9.31 0.00 0.00 53566.11 6262.33 59030.95 00:24:30.960 =================================================================================================================== 00:24:30.960 Total : 2383.50 9.31 0.00 0.00 53566.11 6262.33 59030.95 00:24:30.960 0 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:30.960 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:30.960 nvmf_trace.0 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 689051 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 689051 ']' 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 689051 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 689051 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 689051' 00:24:31.219 killing process with pid 689051 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 689051 00:24:31.219 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.219 00:24:31.219 Latency(us) 00:24:31.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.219 =================================================================================================================== 00:24:31.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.219 [2024-07-15 06:53:18.604145] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 689051 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.219 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.478 rmmod nvme_tcp 00:24:31.478 rmmod nvme_fabrics 00:24:31.478 rmmod nvme_keyring 00:24:31.478 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.478 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:31.478 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:31.478 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 688903 ']' 00:24:31.478 06:53:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 688903 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 688903 ']' 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 688903 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 688903 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 688903' 00:24:31.479 killing process with pid 688903 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 688903 00:24:31.479 [2024-07-15 06:53:18.937569] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:31.479 06:53:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 688903 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.737 06:53:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.642 06:53:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:33.642 06:53:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:33.642 00:24:33.642 real 0m16.701s 00:24:33.642 user 0m20.657s 00:24:33.642 sys 0m6.511s 00:24:33.642 06:53:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.642 06:53:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.642 ************************************ 00:24:33.642 END TEST nvmf_fips 00:24:33.642 
************************************ 00:24:33.642 06:53:21 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:33.642 06:53:21 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:33.642 06:53:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:33.642 06:53:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:33.642 06:53:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.642 ************************************ 00:24:33.642 START TEST nvmf_fuzz 00:24:33.642 ************************************ 00:24:33.642 06:53:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:33.900 * Looking for test storage... 00:24:33.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.900 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.901 06:53:21 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:33.901 06:53:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.807 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:35.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.808 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:35.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:35.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.809 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:35.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.810 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:24:35.811 00:24:35.811 --- 10.0.0.2 ping statistics --- 00:24:35.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.811 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:35.811 00:24:35.811 --- 10.0.0.1 ping statistics --- 00:24:35.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.811 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.811 06:53:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=692217 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 692217 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 692217 ']' 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
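nvmftestinit runs the same physical-NIC bring-up for every test in this log (once above for nvmf_fips, and again here for nvmf_fuzz): the two E810 ports are found by PCI ID, one port is moved into a private network namespace to act as the target, and connectivity is proven with a ping in each direction before any NVMe traffic flows. Stripped of the xtrace noise, the plumbing is:

# The namespace setup traced above, condensed; cvl_0_0/cvl_0_1 are the two
# ports of the E810 NIC found via the PCI scan, addresses as logged.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root

Every nvmf_tgt invocation afterwards is wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_APP is prefixed with NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 while clients connect from the root namespace via 10.0.0.1.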
00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:36.073 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 Malloc0 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:36.341 06:53:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:08.458 Fuzzing completed. 
Shutting down the fuzz application 00:25:08.458 00:25:08.458 Dumping successful admin opcodes: 00:25:08.458 8, 9, 10, 24, 00:25:08.458 Dumping successful io opcodes: 00:25:08.458 0, 9, 00:25:08.458 NS: 0x200003aeff00 I/O qp, Total commands completed: 458953, total successful commands: 2660, random_seed: 581945984 00:25:08.458 NS: 0x200003aeff00 admin qp, Total commands completed: 56160, total successful commands: 447, random_seed: 1878343744 00:25:08.458 06:53:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:08.458 Fuzzing completed. Shutting down the fuzz application 00:25:08.458 00:25:08.458 Dumping successful admin opcodes: 00:25:08.458 24, 00:25:08.458 Dumping successful io opcodes: 00:25:08.458 00:25:08.458 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4238688007 00:25:08.458 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4238792932 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.458 rmmod nvme_tcp 00:25:08.458 rmmod nvme_fabrics 00:25:08.458 rmmod nvme_keyring 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 692217 ']' 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 692217 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 692217 ']' 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 692217 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 692217 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:08.458 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:08.458 
06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 692217' 00:25:08.458 killing process with pid 692217 00:25:08.459 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 692217 00:25:08.459 06:53:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 692217 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.719 06:53:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.631 06:53:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.631 06:53:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:10.631 00:25:10.631 real 0m36.904s 00:25:10.631 user 0m50.905s 00:25:10.631 sys 0m15.639s 00:25:10.631 06:53:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.631 06:53:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:10.631 ************************************ 00:25:10.631 END TEST nvmf_fuzz 00:25:10.631 ************************************ 00:25:10.631 06:53:58 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:10.631 06:53:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.631 06:53:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.631 06:53:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.631 ************************************ 00:25:10.631 START TEST nvmf_multiconnection 00:25:10.631 ************************************ 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:10.631 * Looking for test storage... 
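The fuzz stage that just completed used a deliberately minimal target: one TCP transport, one 64 MiB malloc bdev, and one subsystem with an open ACL (-a), against which nvme_fuzz replayed 30 seconds of randomized admin and I/O commands, first fully random (-S seeding the RNG) and then guided by example.json (-j). The setup, condensed from the rpc_cmd trace above (rpc.py path shortened; every argument as logged):

# The fuzz target assembled above, condensed to its five RPCs plus the fuzzer.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512            # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The opcode dumps above show what survived fuzzing: in the random run several admin (8, 9, 10, 24) and I/O (0, 9) opcodes completed successfully out of roughly half a million commands, while the json-guided run completed only a handful, as intended.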
00:25:10.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.631 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.890 06:53:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.794 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.795 06:54:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:12.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.795 06:54:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
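The scan above is gather_supported_nvmf_pci_devs matching the NIC whitelist (on this rig, Intel E810 functions with device ID 0x159b) and resolving each PCI function to its kernel net device through sysfs — which is where the cvl_0_0 / cvl_0_1 names come from. A minimal sketch of that sysfs lookup, with the BDFs taken from this run:

# The net device bound to a PCI function is the directory name under
# /sys/bus/pci/devices/<bdf>/net/; the glob matches nothing if no driver
# is bound.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue
        echo "Found net device under $pci: ${netdir##*/}"
    done
done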
00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:12.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:25:12.795 00:25:12.795 --- 10.0.0.2 ping statistics --- 00:25:12.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.795 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:12.795 00:25:12.795 --- 10.0.0.1 ping statistics --- 00:25:12.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.795 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=697927 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 697927 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 697927 ']' 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
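nvmf_tcp_init then splits the two ports across network namespaces so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) talk over real hardware on a single host, and the two pings verify the path in both directions before the target starts. Condensed from the commands logged above — all interface names, addresses, and flags are from this run:

# Move the target port into a private namespace; the initiator port
# stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, check reachability both ways, then launch the
# target inside the namespace.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF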
00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:12.795 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 [2024-07-15 06:54:00.383809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:12.795 [2024-07-15 06:54:00.383914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.054 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.054 [2024-07-15 06:54:00.459898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.054 [2024-07-15 06:54:00.556021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.054 [2024-07-15 06:54:00.556078] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.054 [2024-07-15 06:54:00.556092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.054 [2024-07-15 06:54:00.556103] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.054 [2024-07-15 06:54:00.556113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.054 [2024-07-15 06:54:00.558902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.054 [2024-07-15 06:54:00.558957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.054 [2024-07-15 06:54:00.559044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.054 [2024-07-15 06:54:00.559041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 [2024-07-15 06:54:00.714527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 Malloc1 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 [2024-07-15 06:54:00.769650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 Malloc2 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 Malloc3 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.316 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.316 Malloc4 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
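The same four-RPC sequence repeats for all eleven subsystems (Malloc1 through Malloc11 in the trace around this point): create a 64 MB malloc bdev with 512-byte blocks, create a subsystem with a matching serial, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A condensed sketch of the whole loop, calling scripts/rpc.py directly (the test drives these through its rpc_cmd wrapper against the default /var/tmp/spdk.sock):

# Provision cnode1..cnode11, each backed by one malloc bdev.
for i in $(seq 1 11); do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done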
00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.317 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc5 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc6 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc7 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc8 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc9 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.576 Malloc10 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.576 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 Malloc11 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.833 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:14.398 06:54:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:14.398 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:14.398 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.398 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:14.398 06:54:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:16.299 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:16.299 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:16.299 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:16.300 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:16.300 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.300 06:54:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:16.300 06:54:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.300 06:54:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:17.233 06:54:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:17.233 06:54:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:17.233 06:54:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.233 06:54:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:17.233 06:54:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.132 
06:54:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.132 06:54:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:20.065 06:54:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:20.065 06:54:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:20.065 06:54:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.065 06:54:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:20.065 06:54:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.995 06:54:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:22.562 06:54:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:22.562 06:54:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:22.562 06:54:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.562 06:54:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:22.562 06:54:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.467 06:54:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:25.405 06:54:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:25.405 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:25.405 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.405 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:25.405 06:54:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.303 06:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:28.239 06:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:28.239 06:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:28.239 06:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.239 06:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:28.239 06:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.142 06:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:31.078 06:54:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:31.078 06:54:18 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:31.078 06:54:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.078 06:54:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:31.078 06:54:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.982 06:54:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:33.919 06:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:33.919 06:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:33.919 06:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.919 06:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:33.919 06:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.821 06:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:36.751 06:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:36.751 06:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:36.751 06:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.751 06:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
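Every connect in this loop is followed by the same waitforserial poll: sleep, then count block devices whose serial matches the subsystem's (SPDK1..SPDK11), until exactly one appears. A sketch of the idiom as reconstructed from the xtrace — NVME_HOSTNQN and NVME_HOSTID are set by nvmf/common.sh earlier in the log, and the 15-iteration cap matches the trace:

# Poll until lsblk shows exactly one namespace with the expected serial.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
    done
    return 1
}

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
waitforserial SPDK9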
00:25:36.751 06:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.288 06:54:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:39.548 06:54:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:39.548 06:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:39.548 06:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.548 06:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:39.548 06:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.078 06:54:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:42.646 06:54:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:42.646 06:54:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:42.646 06:54:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.646 06:54:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:42.646 06:54:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:44.548 06:54:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:44.548 [global] 00:25:44.548 thread=1 00:25:44.548 invalidate=1 00:25:44.548 rw=read 00:25:44.548 time_based=1 00:25:44.548 runtime=10 00:25:44.548 ioengine=libaio 00:25:44.548 direct=1 00:25:44.548 bs=262144 00:25:44.548 iodepth=64 00:25:44.548 norandommap=1 00:25:44.548 numjobs=1 00:25:44.548 00:25:44.548 [job0] 00:25:44.548 filename=/dev/nvme0n1 00:25:44.548 [job1] 00:25:44.548 filename=/dev/nvme10n1 00:25:44.548 [job2] 00:25:44.548 filename=/dev/nvme1n1 00:25:44.548 [job3] 00:25:44.548 filename=/dev/nvme2n1 00:25:44.548 [job4] 00:25:44.548 filename=/dev/nvme3n1 00:25:44.548 [job5] 00:25:44.548 filename=/dev/nvme4n1 00:25:44.548 [job6] 00:25:44.548 filename=/dev/nvme5n1 00:25:44.548 [job7] 00:25:44.548 filename=/dev/nvme6n1 00:25:44.548 [job8] 00:25:44.548 filename=/dev/nvme7n1 00:25:44.548 [job9] 00:25:44.548 filename=/dev/nvme8n1 00:25:44.548 [job10] 00:25:44.548 filename=/dev/nvme9n1 00:25:44.812 Could not set queue depth (nvme0n1) 00:25:44.812 Could not set queue depth (nvme10n1) 00:25:44.812 Could not set queue depth (nvme1n1) 00:25:44.812 Could not set queue depth (nvme2n1) 00:25:44.812 Could not set queue depth (nvme3n1) 00:25:44.812 Could not set queue depth (nvme4n1) 00:25:44.812 Could not set queue depth (nvme5n1) 00:25:44.812 Could not set queue depth (nvme6n1) 00:25:44.812 Could not set queue depth (nvme7n1) 00:25:44.812 Could not set queue depth (nvme8n1) 00:25:44.812 Could not set queue depth (nvme9n1) 00:25:45.069 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.069 fio-3.35 00:25:45.069 Starting 11 threads 00:25:57.271 00:25:57.271 job0: 
(groupid=0, jobs=1): err= 0: pid=702789: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=500, BW=125MiB/s (131MB/s)(1262MiB/10094msec) 00:25:57.271 slat (usec): min=12, max=58149, avg=1836.21, stdev=5288.88 00:25:57.271 clat (usec): min=1520, max=269789, avg=126037.78, stdev=50462.93 00:25:57.271 lat (usec): min=1541, max=275505, avg=127873.99, stdev=51333.34 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 56], 20.00th=[ 96], 00:25:57.271 | 30.00th=[ 109], 40.00th=[ 118], 50.00th=[ 127], 60.00th=[ 138], 00:25:57.271 | 70.00th=[ 150], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 209], 00:25:57.271 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 266], 99.95th=[ 271], 00:25:57.271 | 99.99th=[ 271] 00:25:57.271 bw ( KiB/s): min=77312, max=221696, per=7.02%, avg=127570.70, stdev=40868.00, samples=20 00:25:57.271 iops : min= 302, max= 866, avg=498.25, stdev=159.53, samples=20 00:25:57.271 lat (msec) : 2=0.02%, 4=0.42%, 10=2.73%, 20=1.88%, 50=4.06% 00:25:57.271 lat (msec) : 100=13.43%, 250=76.86%, 500=0.59% 00:25:57.271 cpu : usr=0.30%, sys=1.70%, ctx=1133, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job1: (groupid=0, jobs=1): err= 0: pid=702790: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=537, BW=134MiB/s (141MB/s)(1357MiB/10096msec) 00:25:57.271 slat (usec): min=9, max=63809, avg=1421.90, stdev=5021.70 00:25:57.271 clat (usec): min=1809, max=258331, avg=117537.69, stdev=51436.46 00:25:57.271 lat (usec): min=1857, max=282584, avg=118959.59, stdev=52239.42 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 52], 20.00th=[ 71], 00:25:57.271 | 30.00th=[ 89], 40.00th=[ 110], 50.00th=[ 123], 60.00th=[ 136], 00:25:57.271 | 70.00th=[ 148], 80.00th=[ 161], 90.00th=[ 180], 95.00th=[ 201], 00:25:57.271 | 99.00th=[ 222], 99.50th=[ 232], 99.90th=[ 247], 99.95th=[ 255], 00:25:57.271 | 99.99th=[ 259] 00:25:57.271 bw ( KiB/s): min=88576, max=208384, per=7.55%, avg=137267.40, stdev=37779.91, samples=20 00:25:57.271 iops : min= 346, max= 814, avg=536.15, stdev=147.57, samples=20 00:25:57.271 lat (msec) : 2=0.04%, 4=0.22%, 10=2.43%, 20=2.99%, 50=4.07% 00:25:57.271 lat (msec) : 100=24.67%, 250=65.51%, 500=0.07% 00:25:57.271 cpu : usr=0.26%, sys=1.83%, ctx=1335, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job2: (groupid=0, jobs=1): err= 0: pid=702793: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=496, BW=124MiB/s (130MB/s)(1252MiB/10088msec) 00:25:57.271 slat (usec): min=9, max=123175, avg=1537.60, stdev=6224.29 00:25:57.271 clat (msec): min=2, max=329, avg=127.32, stdev=53.92 00:25:57.271 lat (msec): min=2, max=335, avg=128.86, stdev=54.79 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 52], 20.00th=[ 77], 00:25:57.271 | 
30.00th=[ 107], 40.00th=[ 121], 50.00th=[ 131], 60.00th=[ 142], 00:25:57.271 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 192], 95.00th=[ 213], 00:25:57.271 | 99.00th=[ 257], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 321], 00:25:57.271 | 99.99th=[ 330] 00:25:57.271 bw ( KiB/s): min=70656, max=224768, per=6.96%, avg=126524.00, stdev=38899.04, samples=20 00:25:57.271 iops : min= 276, max= 878, avg=494.10, stdev=151.93, samples=20 00:25:57.271 lat (msec) : 4=0.08%, 10=0.58%, 20=2.32%, 50=6.77%, 100=17.12% 00:25:57.271 lat (msec) : 250=72.00%, 500=1.14% 00:25:57.271 cpu : usr=0.25%, sys=1.74%, ctx=1172, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job3: (groupid=0, jobs=1): err= 0: pid=702794: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=588, BW=147MiB/s (154MB/s)(1474MiB/10014msec) 00:25:57.271 slat (usec): min=9, max=110246, avg=1206.17, stdev=4598.43 00:25:57.271 clat (usec): min=1746, max=273937, avg=107404.57, stdev=53133.64 00:25:57.271 lat (usec): min=1768, max=343940, avg=108610.73, stdev=53709.92 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 64], 00:25:57.271 | 30.00th=[ 78], 40.00th=[ 91], 50.00th=[ 104], 60.00th=[ 120], 00:25:57.271 | 70.00th=[ 133], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 203], 00:25:57.271 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 271], 00:25:57.271 | 99.99th=[ 275] 00:25:57.271 bw ( KiB/s): min=87040, max=236032, per=8.22%, avg=149306.50, stdev=41797.67, samples=20 00:25:57.271 iops : min= 340, max= 922, avg=583.20, stdev=163.22, samples=20 00:25:57.271 lat (msec) : 2=0.03%, 4=0.46%, 10=1.02%, 20=3.44%, 50=7.83% 00:25:57.271 lat (msec) : 100=34.48%, 250=52.25%, 500=0.49% 00:25:57.271 cpu : usr=0.32%, sys=1.76%, ctx=1368, majf=0, minf=3721 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job4: (groupid=0, jobs=1): err= 0: pid=702795: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=752, BW=188MiB/s (197MB/s)(1893MiB/10057msec) 00:25:57.271 slat (usec): min=9, max=108183, avg=684.80, stdev=3920.71 00:25:57.271 clat (usec): min=757, max=304427, avg=84278.12, stdev=56377.52 00:25:57.271 lat (usec): min=777, max=304446, avg=84962.93, stdev=56873.14 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 29], 00:25:57.271 | 30.00th=[ 45], 40.00th=[ 63], 50.00th=[ 80], 60.00th=[ 93], 00:25:57.271 | 70.00th=[ 109], 80.00th=[ 131], 90.00th=[ 165], 95.00th=[ 197], 00:25:57.271 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 253], 99.95th=[ 262], 00:25:57.271 | 99.99th=[ 305] 00:25:57.271 bw ( KiB/s): min=91136, max=376320, per=10.57%, avg=192164.10, stdev=82850.79, samples=20 00:25:57.271 iops : min= 356, max= 1470, avg=750.55, stdev=323.68, samples=20 00:25:57.271 lat (usec) : 1000=0.09% 00:25:57.271 lat 
(msec) : 2=0.18%, 4=0.85%, 10=3.71%, 20=9.00%, 50=19.41% 00:25:57.271 lat (msec) : 100=31.06%, 250=35.60%, 500=0.11% 00:25:57.271 cpu : usr=0.26%, sys=1.82%, ctx=1734, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=7570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job5: (groupid=0, jobs=1): err= 0: pid=702797: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=564, BW=141MiB/s (148MB/s)(1424MiB/10091msec) 00:25:57.271 slat (usec): min=14, max=98003, avg=1681.47, stdev=5311.37 00:25:57.271 clat (msec): min=2, max=277, avg=111.60, stdev=61.50 00:25:57.271 lat (msec): min=2, max=277, avg=113.28, stdev=62.48 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 34], 20.00th=[ 39], 00:25:57.271 | 30.00th=[ 56], 40.00th=[ 109], 50.00th=[ 124], 60.00th=[ 134], 00:25:57.271 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 186], 95.00th=[ 213], 00:25:57.271 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 275], 99.95th=[ 275], 00:25:57.271 | 99.99th=[ 279] 00:25:57.271 bw ( KiB/s): min=71168, max=413892, per=7.93%, avg=144126.95, stdev=91186.51, samples=20 00:25:57.271 iops : min= 278, max= 1616, avg=562.95, stdev=356.06, samples=20 00:25:57.271 lat (msec) : 4=0.60%, 10=1.93%, 20=2.49%, 50=23.16%, 100=7.50% 00:25:57.271 lat (msec) : 250=63.85%, 500=0.47% 00:25:57.271 cpu : usr=0.39%, sys=1.93%, ctx=1257, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job6: (groupid=0, jobs=1): err= 0: pid=702799: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=560, BW=140MiB/s (147MB/s)(1415MiB/10092msec) 00:25:57.271 slat (usec): min=9, max=78966, avg=1083.01, stdev=4552.46 00:25:57.271 clat (msec): min=2, max=293, avg=112.96, stdev=57.94 00:25:57.271 lat (msec): min=2, max=293, avg=114.04, stdev=58.53 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 27], 20.00th=[ 59], 00:25:57.271 | 30.00th=[ 89], 40.00th=[ 108], 50.00th=[ 120], 60.00th=[ 129], 00:25:57.271 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 188], 95.00th=[ 213], 00:25:57.271 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 279], 00:25:57.271 | 99.99th=[ 292] 00:25:57.271 bw ( KiB/s): min=75776, max=239104, per=7.88%, avg=143227.45, stdev=39736.56, samples=20 00:25:57.271 iops : min= 296, max= 934, avg=559.40, stdev=155.21, samples=20 00:25:57.271 lat (msec) : 4=1.06%, 10=3.15%, 20=2.79%, 50=11.47%, 100=17.21% 00:25:57.271 lat (msec) : 250=63.83%, 500=0.49% 00:25:57.271 cpu : usr=0.20%, sys=1.62%, ctx=1465, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=5659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 
latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job7: (groupid=0, jobs=1): err= 0: pid=702800: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=895, BW=224MiB/s (235MB/s)(2259MiB/10093msec) 00:25:57.271 slat (usec): min=12, max=85838, avg=1030.70, stdev=3643.09 00:25:57.271 clat (msec): min=3, max=258, avg=70.39, stdev=44.70 00:25:57.271 lat (msec): min=3, max=310, avg=71.42, stdev=45.37 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 32], 00:25:57.271 | 30.00th=[ 37], 40.00th=[ 49], 50.00th=[ 60], 60.00th=[ 71], 00:25:57.271 | 70.00th=[ 86], 80.00th=[ 103], 90.00th=[ 140], 95.00th=[ 165], 00:25:57.271 | 99.00th=[ 205], 99.50th=[ 226], 99.90th=[ 249], 99.95th=[ 253], 00:25:57.271 | 99.99th=[ 259] 00:25:57.271 bw ( KiB/s): min=97792, max=480256, per=12.64%, avg=229747.80, stdev=117857.16, samples=20 00:25:57.271 iops : min= 382, max= 1876, avg=897.40, stdev=460.34, samples=20 00:25:57.271 lat (msec) : 4=0.01%, 10=0.80%, 20=2.95%, 50=37.53%, 100=37.84% 00:25:57.271 lat (msec) : 250=20.80%, 500=0.06% 00:25:57.271 cpu : usr=0.54%, sys=2.52%, ctx=1663, majf=0, minf=4097 00:25:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.271 issued rwts: total=9037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.271 job8: (groupid=0, jobs=1): err= 0: pid=702802: Mon Jul 15 06:54:42 2024 00:25:57.271 read: IOPS=546, BW=137MiB/s (143MB/s)(1376MiB/10075msec) 00:25:57.271 slat (usec): min=9, max=63516, avg=1248.14, stdev=4240.62 00:25:57.271 clat (usec): min=1049, max=260312, avg=115850.42, stdev=52998.61 00:25:57.271 lat (usec): min=1072, max=260327, avg=117098.56, stdev=53652.04 00:25:57.271 clat percentiles (msec): 00:25:57.271 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 66], 00:25:57.271 | 30.00th=[ 94], 40.00th=[ 110], 50.00th=[ 124], 60.00th=[ 136], 00:25:57.271 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 199], 00:25:57.271 | 99.00th=[ 226], 99.50th=[ 232], 99.90th=[ 249], 99.95th=[ 251], 00:25:57.271 | 99.99th=[ 262] 00:25:57.272 bw ( KiB/s): min=69120, max=330240, per=7.66%, avg=139212.55, stdev=62905.60, samples=20 00:25:57.272 iops : min= 270, max= 1290, avg=543.75, stdev=245.73, samples=20 00:25:57.272 lat (msec) : 2=0.13%, 4=0.69%, 10=1.45%, 20=1.94%, 50=12.30% 00:25:57.272 lat (msec) : 100=17.19%, 250=66.23%, 500=0.05% 00:25:57.272 cpu : usr=0.30%, sys=1.55%, ctx=1357, majf=0, minf=4097 00:25:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.272 issued rwts: total=5502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.272 job9: (groupid=0, jobs=1): err= 0: pid=702803: Mon Jul 15 06:54:42 2024 00:25:57.272 read: IOPS=714, BW=179MiB/s (187MB/s)(1797MiB/10058msec) 00:25:57.272 slat (usec): min=12, max=113860, avg=1265.84, stdev=4604.97 00:25:57.272 clat (usec): min=1486, max=367970, avg=88244.21, stdev=50755.05 00:25:57.272 lat (usec): min=1523, max=367988, avg=89510.05, stdev=51417.92 00:25:57.272 clat percentiles (msec): 00:25:57.272 | 1.00th=[ 
9], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 50], 00:25:57.272 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 78], 60.00th=[ 88], 00:25:57.272 | 70.00th=[ 99], 80.00th=[ 116], 90.00th=[ 165], 95.00th=[ 205], 00:25:57.272 | 99.00th=[ 255], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 334], 00:25:57.272 | 99.99th=[ 368] 00:25:57.272 bw ( KiB/s): min=72704, max=384000, per=10.03%, avg=182300.50, stdev=80613.94, samples=20 00:25:57.272 iops : min= 284, max= 1500, avg=712.05, stdev=314.84, samples=20 00:25:57.272 lat (msec) : 2=0.07%, 4=0.14%, 10=1.03%, 20=0.51%, 50=18.40% 00:25:57.272 lat (msec) : 100=51.27%, 250=27.44%, 500=1.14% 00:25:57.272 cpu : usr=0.51%, sys=2.15%, ctx=1437, majf=0, minf=4097 00:25:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.272 issued rwts: total=7186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.272 job10: (groupid=0, jobs=1): err= 0: pid=702804: Mon Jul 15 06:54:42 2024 00:25:57.272 read: IOPS=954, BW=239MiB/s (250MB/s)(2409MiB/10095msec) 00:25:57.272 slat (usec): min=9, max=90560, avg=787.90, stdev=2694.84 00:25:57.272 clat (msec): min=6, max=247, avg=66.21, stdev=39.33 00:25:57.272 lat (msec): min=6, max=281, avg=66.99, stdev=39.56 00:25:57.272 clat percentiles (msec): 00:25:57.272 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 35], 00:25:57.272 | 30.00th=[ 39], 40.00th=[ 45], 50.00th=[ 53], 60.00th=[ 62], 00:25:57.272 | 70.00th=[ 74], 80.00th=[ 100], 90.00th=[ 125], 95.00th=[ 153], 00:25:57.272 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 226], 99.95th=[ 232], 00:25:57.272 | 99.99th=[ 249] 00:25:57.272 bw ( KiB/s): min=112640, max=494080, per=13.48%, avg=245044.05, stdev=117728.76, samples=20 00:25:57.272 iops : min= 440, max= 1930, avg=957.15, stdev=459.93, samples=20 00:25:57.272 lat (msec) : 10=0.05%, 20=0.11%, 50=47.17%, 100=32.78%, 250=19.88% 00:25:57.272 cpu : usr=0.41%, sys=2.91%, ctx=1831, majf=0, minf=4097 00:25:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.272 issued rwts: total=9636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.272 00:25:57.272 Run status group 0 (all jobs): 00:25:57.272 READ: bw=1775MiB/s (1861MB/s), 124MiB/s-239MiB/s (130MB/s-250MB/s), io=17.5GiB (18.8GB), run=10014-10096msec 00:25:57.272 00:25:57.272 Disk stats (read/write): 00:25:57.272 nvme0n1: ios=9723/0, merge=0/0, ticks=1231653/0, in_queue=1231653, util=96.93% 00:25:57.272 nvme10n1: ios=10599/0, merge=0/0, ticks=1235034/0, in_queue=1235034, util=97.16% 00:25:57.272 nvme1n1: ios=9795/0, merge=0/0, ticks=1229458/0, in_queue=1229458, util=97.45% 00:25:57.272 nvme2n1: ios=11396/0, merge=0/0, ticks=1235130/0, in_queue=1235130, util=97.63% 00:25:57.272 nvme3n1: ios=14914/0, merge=0/0, ticks=1240246/0, in_queue=1240246, util=97.69% 00:25:57.272 nvme4n1: ios=11180/0, merge=0/0, ticks=1222741/0, in_queue=1222741, util=98.07% 00:25:57.272 nvme5n1: ios=11108/0, merge=0/0, ticks=1236218/0, in_queue=1236218, util=98.26% 00:25:57.272 nvme6n1: ios=17806/0, merge=0/0, ticks=1234896/0, in_queue=1234896, util=98.38% 00:25:57.272 nvme7n1: 
ios=10748/0, merge=0/0, ticks=1235939/0, in_queue=1235939, util=98.84% 00:25:57.272 nvme8n1: ios=14134/0, merge=0/0, ticks=1233253/0, in_queue=1233253, util=99.04% 00:25:57.272 nvme9n1: ios=19049/0, merge=0/0, ticks=1237355/0, in_queue=1237355, util=99.22% 00:25:57.272 06:54:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:57.272 [global] 00:25:57.272 thread=1 00:25:57.272 invalidate=1 00:25:57.272 rw=randwrite 00:25:57.272 time_based=1 00:25:57.272 runtime=10 00:25:57.272 ioengine=libaio 00:25:57.272 direct=1 00:25:57.272 bs=262144 00:25:57.272 iodepth=64 00:25:57.272 norandommap=1 00:25:57.272 numjobs=1 00:25:57.272 00:25:57.272 [job0] 00:25:57.272 filename=/dev/nvme0n1 00:25:57.272 [job1] 00:25:57.272 filename=/dev/nvme10n1 00:25:57.272 [job2] 00:25:57.272 filename=/dev/nvme1n1 00:25:57.272 [job3] 00:25:57.272 filename=/dev/nvme2n1 00:25:57.272 [job4] 00:25:57.272 filename=/dev/nvme3n1 00:25:57.272 [job5] 00:25:57.272 filename=/dev/nvme4n1 00:25:57.272 [job6] 00:25:57.272 filename=/dev/nvme5n1 00:25:57.272 [job7] 00:25:57.272 filename=/dev/nvme6n1 00:25:57.272 [job8] 00:25:57.272 filename=/dev/nvme7n1 00:25:57.272 [job9] 00:25:57.272 filename=/dev/nvme8n1 00:25:57.272 [job10] 00:25:57.272 filename=/dev/nvme9n1 00:25:57.272 Could not set queue depth (nvme0n1) 00:25:57.272 Could not set queue depth (nvme10n1) 00:25:57.272 Could not set queue depth (nvme1n1) 00:25:57.272 Could not set queue depth (nvme2n1) 00:25:57.272 Could not set queue depth (nvme3n1) 00:25:57.272 Could not set queue depth (nvme4n1) 00:25:57.272 Could not set queue depth (nvme5n1) 00:25:57.272 Could not set queue depth (nvme6n1) 00:25:57.272 Could not set queue depth (nvme7n1) 00:25:57.272 Could not set queue depth (nvme8n1) 00:25:57.272 Could not set queue depth (nvme9n1) 00:25:57.272 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.272 fio-3.35 00:25:57.272 Starting 11 threads 00:26:07.235 00:26:07.235 job0: (groupid=0, jobs=1): err= 0: pid=703969: Mon Jul 15 06:54:53 2024 00:26:07.235 write: IOPS=651, BW=163MiB/s (171MB/s)(1661MiB/10193msec); 0 zone 
resets 00:26:07.235 slat (usec): min=22, max=39348, avg=1134.06, stdev=2934.67 00:26:07.235 clat (usec): min=1658, max=389168, avg=96963.67, stdev=55019.70 00:26:07.235 lat (msec): min=2, max=389, avg=98.10, stdev=55.76 00:26:07.235 clat percentiles (msec): 00:26:07.235 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 33], 20.00th=[ 52], 00:26:07.235 | 30.00th=[ 65], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 104], 00:26:07.235 | 70.00th=[ 121], 80.00th=[ 140], 90.00th=[ 176], 95.00th=[ 192], 00:26:07.235 | 99.00th=[ 251], 99.50th=[ 288], 99.90th=[ 363], 99.95th=[ 376], 00:26:07.235 | 99.99th=[ 388] 00:26:07.235 bw ( KiB/s): min=82432, max=296960, per=12.57%, avg=168450.45, stdev=55629.53, samples=20 00:26:07.235 iops : min= 322, max= 1160, avg=658.00, stdev=217.29, samples=20 00:26:07.235 lat (msec) : 2=0.05%, 4=0.35%, 10=2.44%, 20=3.24%, 50=12.82% 00:26:07.235 lat (msec) : 100=38.97%, 250=41.13%, 500=1.01% 00:26:07.235 cpu : usr=2.21%, sys=2.33%, ctx=3443, majf=0, minf=1 00:26:07.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:07.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.235 issued rwts: total=0,6644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.235 job1: (groupid=0, jobs=1): err= 0: pid=703981: Mon Jul 15 06:54:53 2024 00:26:07.235 write: IOPS=446, BW=112MiB/s (117MB/s)(1137MiB/10194msec); 0 zone resets 00:26:07.235 slat (usec): min=20, max=135390, avg=1724.47, stdev=4855.27 00:26:07.235 clat (usec): min=1194, max=378554, avg=141589.96, stdev=74961.04 00:26:07.235 lat (usec): min=1245, max=378593, avg=143314.43, stdev=76026.05 00:26:07.235 clat percentiles (msec): 00:26:07.235 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 63], 00:26:07.235 | 30.00th=[ 96], 40.00th=[ 126], 50.00th=[ 142], 60.00th=[ 159], 00:26:07.235 | 70.00th=[ 182], 80.00th=[ 207], 90.00th=[ 243], 95.00th=[ 271], 00:26:07.235 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:26:07.235 | 99.99th=[ 380] 00:26:07.235 bw ( KiB/s): min=53141, max=216576, per=8.57%, avg=114825.80, stdev=41490.67, samples=20 00:26:07.235 iops : min= 207, max= 846, avg=448.50, stdev=162.12, samples=20 00:26:07.235 lat (msec) : 2=0.18%, 4=0.44%, 10=1.03%, 20=2.24%, 50=10.44% 00:26:07.235 lat (msec) : 100=16.79%, 250=61.46%, 500=7.41% 00:26:07.235 cpu : usr=1.46%, sys=1.55%, ctx=2277, majf=0, minf=1 00:26:07.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:07.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.235 issued rwts: total=0,4549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.235 job2: (groupid=0, jobs=1): err= 0: pid=703988: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=408, BW=102MiB/s (107MB/s)(1040MiB/10194msec); 0 zone resets 00:26:07.236 slat (usec): min=21, max=72873, avg=1873.65, stdev=4433.34 00:26:07.236 clat (usec): min=1203, max=345396, avg=154732.41, stdev=58762.71 00:26:07.236 lat (usec): min=1266, max=345459, avg=156606.06, stdev=59501.32 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 5], 5.00th=[ 63], 10.00th=[ 83], 20.00th=[ 107], 00:26:07.236 | 30.00th=[ 130], 40.00th=[ 142], 50.00th=[ 153], 60.00th=[ 165], 00:26:07.236 | 70.00th=[ 184], 
80.00th=[ 201], 90.00th=[ 226], 95.00th=[ 262], 00:26:07.236 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 342], 00:26:07.236 | 99.99th=[ 347] 00:26:07.236 bw ( KiB/s): min=61440, max=176128, per=7.83%, avg=104910.70, stdev=26943.93, samples=20 00:26:07.236 iops : min= 240, max= 688, avg=409.75, stdev=105.26, samples=20 00:26:07.236 lat (msec) : 2=0.38%, 4=0.60%, 10=1.11%, 20=0.53%, 50=1.11% 00:26:07.236 lat (msec) : 100=11.46%, 250=79.00%, 500=5.82% 00:26:07.236 cpu : usr=1.23%, sys=1.38%, ctx=1873, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job3: (groupid=0, jobs=1): err= 0: pid=703989: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=514, BW=129MiB/s (135MB/s)(1300MiB/10107msec); 0 zone resets 00:26:07.236 slat (usec): min=21, max=144416, avg=1581.70, stdev=4694.91 00:26:07.236 clat (usec): min=1097, max=314237, avg=122719.10, stdev=60794.13 00:26:07.236 lat (usec): min=1146, max=322834, avg=124300.80, stdev=61615.27 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 71], 00:26:07.236 | 30.00th=[ 93], 40.00th=[ 111], 50.00th=[ 126], 60.00th=[ 140], 00:26:07.236 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 209], 95.00th=[ 224], 00:26:07.236 | 99.00th=[ 262], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 309], 00:26:07.236 | 99.99th=[ 313] 00:26:07.236 bw ( KiB/s): min=71680, max=275968, per=9.81%, avg=131485.50, stdev=48238.84, samples=20 00:26:07.236 iops : min= 280, max= 1078, avg=513.60, stdev=188.44, samples=20 00:26:07.236 lat (msec) : 2=0.19%, 4=0.83%, 10=1.92%, 20=2.89%, 50=9.64% 00:26:07.236 lat (msec) : 100=17.00%, 250=66.24%, 500=1.29% 00:26:07.236 cpu : usr=1.45%, sys=1.60%, ctx=2420, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,5199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job4: (groupid=0, jobs=1): err= 0: pid=703990: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=461, BW=115MiB/s (121MB/s)(1167MiB/10107msec); 0 zone resets 00:26:07.236 slat (usec): min=21, max=69222, avg=1640.03, stdev=4307.99 00:26:07.236 clat (usec): min=1108, max=267816, avg=136765.59, stdev=62740.17 00:26:07.236 lat (usec): min=1138, max=267855, avg=138405.62, stdev=63563.35 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 34], 20.00th=[ 80], 00:26:07.236 | 30.00th=[ 111], 40.00th=[ 129], 50.00th=[ 150], 60.00th=[ 165], 00:26:07.236 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 211], 95.00th=[ 222], 00:26:07.236 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 264], 99.95th=[ 266], 00:26:07.236 | 99.99th=[ 268] 00:26:07.236 bw ( KiB/s): min=81920, max=169472, per=8.80%, avg=117915.05, stdev=25182.23, samples=20 00:26:07.236 iops : min= 320, max= 662, avg=460.60, stdev=98.36, samples=20 00:26:07.236 lat (msec) : 2=0.51%, 4=1.31%, 10=2.06%, 20=2.48%, 50=5.95% 00:26:07.236 lat (msec) : 
100=12.91%, 250=73.87%, 500=0.90% 00:26:07.236 cpu : usr=1.43%, sys=1.46%, ctx=2423, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job5: (groupid=0, jobs=1): err= 0: pid=703991: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=419, BW=105MiB/s (110MB/s)(1060MiB/10105msec); 0 zone resets 00:26:07.236 slat (usec): min=23, max=107996, avg=1994.38, stdev=4420.30 00:26:07.236 clat (msec): min=2, max=369, avg=150.50, stdev=50.87 00:26:07.236 lat (msec): min=2, max=369, avg=152.50, stdev=51.53 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 27], 5.00th=[ 71], 10.00th=[ 90], 20.00th=[ 107], 00:26:07.236 | 30.00th=[ 124], 40.00th=[ 140], 50.00th=[ 146], 60.00th=[ 163], 00:26:07.236 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 211], 95.00th=[ 222], 00:26:07.236 | 99.00th=[ 279], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 359], 00:26:07.236 | 99.99th=[ 372] 00:26:07.236 bw ( KiB/s): min=81408, max=176128, per=7.97%, avg=106908.65, stdev=23398.19, samples=20 00:26:07.236 iops : min= 318, max= 688, avg=417.55, stdev=91.41, samples=20 00:26:07.236 lat (msec) : 4=0.12%, 10=0.07%, 20=0.42%, 50=2.76%, 100=10.33% 00:26:07.236 lat (msec) : 250=84.81%, 500=1.49% 00:26:07.236 cpu : usr=1.26%, sys=1.35%, ctx=1727, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job6: (groupid=0, jobs=1): err= 0: pid=703992: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=476, BW=119MiB/s (125MB/s)(1215MiB/10198msec); 0 zone resets 00:26:07.236 slat (usec): min=21, max=61413, avg=1362.28, stdev=3566.29 00:26:07.236 clat (msec): min=2, max=343, avg=132.45, stdev=56.49 00:26:07.236 lat (msec): min=2, max=343, avg=133.81, stdev=57.12 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 61], 20.00th=[ 84], 00:26:07.236 | 30.00th=[ 104], 40.00th=[ 115], 50.00th=[ 131], 60.00th=[ 146], 00:26:07.236 | 70.00th=[ 165], 80.00th=[ 182], 90.00th=[ 209], 95.00th=[ 226], 00:26:07.236 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 338], 99.95th=[ 342], 00:26:07.236 | 99.99th=[ 342] 00:26:07.236 bw ( KiB/s): min=71536, max=180224, per=9.16%, avg=122832.55, stdev=27999.60, samples=20 00:26:07.236 iops : min= 279, max= 704, avg=479.75, stdev=109.44, samples=20 00:26:07.236 lat (msec) : 4=0.10%, 10=0.68%, 20=1.73%, 50=5.60%, 100=20.41% 00:26:07.236 lat (msec) : 250=70.62%, 500=0.86% 00:26:07.236 cpu : usr=1.44%, sys=1.63%, ctx=2741, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 
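One way to read fio's per= column in the bw lines of these job blocks: it appears to be the job's average bandwidth as a share of the group aggregate (the WRITE: bw=1309MiB/s figure in the run status line further down). A quick sanity check for job6 above, under that assumption:

    # job6 avg bw 122832.55 KiB/s vs. group aggregate 1309 MiB/s (rounded)
    echo "scale=4; 122832.55 / (1309 * 1024)" | bc   # .0916 -> matches per=9.16%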
00:26:07.236 job7: (groupid=0, jobs=1): err= 0: pid=703993: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=462, BW=116MiB/s (121MB/s)(1179MiB/10192msec); 0 zone resets 00:26:07.236 slat (usec): min=18, max=61017, avg=1473.69, stdev=4052.20 00:26:07.236 clat (msec): min=2, max=391, avg=136.75, stdev=69.42 00:26:07.236 lat (msec): min=2, max=391, avg=138.22, stdev=70.36 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 78], 00:26:07.236 | 30.00th=[ 90], 40.00th=[ 118], 50.00th=[ 144], 60.00th=[ 161], 00:26:07.236 | 70.00th=[ 171], 80.00th=[ 190], 90.00th=[ 213], 95.00th=[ 268], 00:26:07.236 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 376], 99.95th=[ 376], 00:26:07.236 | 99.99th=[ 393] 00:26:07.236 bw ( KiB/s): min=61317, max=207360, per=8.88%, avg=119100.70, stdev=41456.68, samples=20 00:26:07.236 iops : min= 239, max= 810, avg=465.20, stdev=161.98, samples=20 00:26:07.236 lat (msec) : 4=0.19%, 10=1.23%, 20=2.76%, 50=7.51%, 100=22.96% 00:26:07.236 lat (msec) : 250=58.82%, 500=6.53% 00:26:07.236 cpu : usr=1.52%, sys=1.71%, ctx=2694, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job8: (groupid=0, jobs=1): err= 0: pid=703994: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1002MiB/10090msec); 0 zone resets 00:26:07.236 slat (usec): min=25, max=123604, avg=1858.80, stdev=5507.89 00:26:07.236 clat (msec): min=7, max=348, avg=159.07, stdev=60.78 00:26:07.236 lat (msec): min=7, max=348, avg=160.93, stdev=61.62 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 79], 20.00th=[ 118], 00:26:07.236 | 30.00th=[ 136], 40.00th=[ 146], 50.00th=[ 159], 60.00th=[ 171], 00:26:07.236 | 70.00th=[ 184], 80.00th=[ 203], 90.00th=[ 232], 95.00th=[ 268], 00:26:07.236 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 334], 00:26:07.236 | 99.99th=[ 351] 00:26:07.236 bw ( KiB/s): min=65536, max=131072, per=7.54%, avg=101020.00, stdev=20200.33, samples=20 00:26:07.236 iops : min= 256, max= 512, avg=394.55, stdev=78.90, samples=20 00:26:07.236 lat (msec) : 10=0.12%, 20=2.07%, 50=3.27%, 100=7.71%, 250=79.82% 00:26:07.236 lat (msec) : 500=7.01% 00:26:07.236 cpu : usr=1.45%, sys=1.18%, ctx=1951, majf=0, minf=1 00:26:07.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:07.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.236 issued rwts: total=0,4009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.236 job9: (groupid=0, jobs=1): err= 0: pid=703995: Mon Jul 15 06:54:53 2024 00:26:07.236 write: IOPS=440, BW=110MiB/s (115MB/s)(1112MiB/10110msec); 0 zone resets 00:26:07.236 slat (usec): min=26, max=80389, avg=1788.61, stdev=4019.94 00:26:07.236 clat (msec): min=5, max=267, avg=143.50, stdev=46.73 00:26:07.236 lat (msec): min=6, max=267, avg=145.29, stdev=47.27 00:26:07.236 clat percentiles (msec): 00:26:07.236 | 1.00th=[ 20], 5.00th=[ 50], 10.00th=[ 73], 20.00th=[ 113], 00:26:07.236 | 30.00th=[ 132], 
40.00th=[ 140], 50.00th=[ 146], 60.00th=[ 153], 00:26:07.236 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 199], 95.00th=[ 226], 00:26:07.236 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 264], 99.95th=[ 266], 00:26:07.236 | 99.99th=[ 268] 00:26:07.236 bw ( KiB/s): min=71168, max=188928, per=8.38%, avg=112283.50, stdev=25010.40, samples=20 00:26:07.236 iops : min= 278, max= 738, avg=438.55, stdev=97.73, samples=20 00:26:07.236 lat (msec) : 10=0.16%, 20=0.92%, 50=4.02%, 100=9.13%, 250=84.78% 00:26:07.236 lat (msec) : 500=0.99% 00:26:07.236 cpu : usr=1.36%, sys=1.49%, ctx=1937, majf=0, minf=1 00:26:07.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:07.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.237 issued rwts: total=0,4449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.237 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.237 job10: (groupid=0, jobs=1): err= 0: pid=703996: Mon Jul 15 06:54:53 2024 00:26:07.237 write: IOPS=584, BW=146MiB/s (153MB/s)(1477MiB/10112msec); 0 zone resets 00:26:07.237 slat (usec): min=17, max=150375, avg=1246.65, stdev=4450.27 00:26:07.237 clat (usec): min=1320, max=404242, avg=108164.28, stdev=76160.71 00:26:07.237 lat (usec): min=1357, max=419379, avg=109410.93, stdev=77055.37 00:26:07.237 clat percentiles (msec): 00:26:07.237 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 44], 00:26:07.237 | 30.00th=[ 46], 40.00th=[ 62], 50.00th=[ 104], 60.00th=[ 128], 00:26:07.237 | 70.00th=[ 144], 80.00th=[ 178], 90.00th=[ 222], 95.00th=[ 247], 00:26:07.237 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 388], 99.95th=[ 405], 00:26:07.237 | 99.99th=[ 405] 00:26:07.237 bw ( KiB/s): min=60928, max=353280, per=11.16%, avg=149658.90, stdev=79016.12, samples=20 00:26:07.237 iops : min= 238, max= 1380, avg=584.55, stdev=308.69, samples=20 00:26:07.237 lat (msec) : 2=0.25%, 4=1.08%, 10=3.01%, 20=4.50%, 50=27.69% 00:26:07.237 lat (msec) : 100=12.71%, 250=46.27%, 500=4.48% 00:26:07.237 cpu : usr=1.94%, sys=1.97%, ctx=3110, majf=0, minf=1 00:26:07.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:07.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.237 issued rwts: total=0,5909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.237 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.237 00:26:07.237 Run status group 0 (all jobs): 00:26:07.237 WRITE: bw=1309MiB/s (1373MB/s), 99.3MiB/s-163MiB/s (104MB/s-171MB/s), io=13.0GiB (14.0GB), run=10090-10198msec 00:26:07.237 00:26:07.237 Disk stats (read/write): 00:26:07.237 nvme0n1: ios=45/13275, merge=0/0, ticks=966/1242814, in_queue=1243780, util=99.88% 00:26:07.237 nvme10n1: ios=47/9074, merge=0/0, ticks=1159/1240397, in_queue=1241556, util=99.99% 00:26:07.237 nvme1n1: ios=41/8308, merge=0/0, ticks=1243/1240263, in_queue=1241506, util=100.00% 00:26:07.237 nvme2n1: ios=46/10208, merge=0/0, ticks=4094/1182057, in_queue=1186151, util=100.00% 00:26:07.237 nvme3n1: ios=42/9149, merge=0/0, ticks=1036/1216139, in_queue=1217175, util=100.00% 00:26:07.237 nvme4n1: ios=0/8293, merge=0/0, ticks=0/1214391, in_queue=1214391, util=98.12% 00:26:07.237 nvme5n1: ios=43/9703, merge=0/0, ticks=1294/1244192, in_queue=1245486, util=100.00% 00:26:07.237 nvme6n1: ios=0/9410, merge=0/0, ticks=0/1245173, in_queue=1245173, util=98.37% 
00:26:07.237 nvme7n1: ios=45/7792, merge=0/0, ticks=3065/1201858, in_queue=1204923, util=100.00% 00:26:07.237 nvme8n1: ios=34/8718, merge=0/0, ticks=964/1211538, in_queue=1212502, util=100.00% 00:26:07.237 nvme9n1: ios=48/11634, merge=0/0, ticks=4767/1205125, in_queue=1209892, util=100.00% 00:26:07.237 06:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:07.237 06:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:07.237 06:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.237 06:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:07.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:07.237 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:07.237 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.237 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:07.495 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:07.495 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:07.495 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.495 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.495 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:26:07.495 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.496 06:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.755 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 
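The disconnect phase running through this part of the trace mirrors the connect phase: tear down one controller, poll lsblk until the serial disappears, then delete the subsystem over RPC. A sketch of the loop (multiconnection.sh lines 37-40) with the waitforserial_disconnect body paraphrased from the autotest_common.sh trace (around lines 1215-1227); the retry bound and poll interval are assumptions, not traced values:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # poll until no block device advertises this serial any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1   # retry bound assumed
            sleep 1                      # poll interval assumed
        done
        return 0
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK$i"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done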
00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:07.755 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:07.755 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:08.014 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:26:08.014 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:08.014 06:54:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:26:08.274 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:08.275 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.275 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:08.596 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.596 06:54:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.596 06:54:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:08.596 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.596 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:08.597 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.597 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.597 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.597 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.597 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:08.883 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:08.883 06:54:56 
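The loop traced above applies one pattern per remaining subsystem: disconnect the initiator from the NQN, poll until no block device reports the matching serial, then remove the subsystem over RPC. A minimal sketch of that pattern, assuming the NVMF_SUBSYS count, the rpc_cmd wrapper, and the SPDK$i serial naming seen in the trace:

# Per-subsystem teardown as traced in multiconnection.sh lines 37-40 (sketch).
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect: spin until lsblk no longer lists the serial
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

Deleting the subsystem only after the serial disappears avoids racing the kernel's device teardown, which is why the helper polls lsblk instead of trusting the disconnect return code.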
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.883 rmmod nvme_tcp 00:26:08.883 rmmod nvme_fabrics 00:26:08.883 rmmod nvme_keyring 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 697927 ']' 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 697927 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 697927 ']' 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 697927 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 697927 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 697927' 00:26:08.883 killing process with pid 697927 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 697927 00:26:08.883 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 697927 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.451 06:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.356 06:54:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.356 00:26:11.356 real 1m0.654s 00:26:11.356 user 3m25.243s 00:26:11.356 sys 0m23.361s 00:26:11.356 06:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:11.356 06:54:58 
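nvmftestfini then unloads the initiator-side kernel modules and kills the target. The killprocess helper traced above refuses to signal anything that is no longer alive and, on Linux, checks the command name first so a recycled PID cannot take down an unrelated process. A rough reconstruction, assuming the helper's shape matches the trace:

# killprocess as reconstructed from the autotest_common.sh trace above (sketch).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1    # still alive?
    if [ "$(uname)" = Linux ]; then
        # confirm the PID still belongs to the SPDK reactor
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the trace shows a special case when the name is 'sudo' (the wrapped
        # child is targeted instead); omitted here for brevity
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # works here because the target was started by this shell
}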
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.356 ************************************ 00:26:11.356 END TEST nvmf_multiconnection 00:26:11.356 ************************************ 00:26:11.356 06:54:58 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.356 06:54:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:11.356 06:54:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:11.356 06:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.356 ************************************ 00:26:11.356 START TEST nvmf_initiator_timeout 00:26:11.356 ************************************ 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.356 * Looking for test storage... 00:26:11.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.356 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.357 06:54:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:13.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:13.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.894 
06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:13.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:13.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.894 06:55:00 
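Two usable E810 ports were found, so nvmf_tcp_init (traced next) gives the target its own network namespace and leaves the initiator in the root namespace; the two NICs then carry real TCP traffic on a single host. In outline, using the interface names and addresses from this run:

# Namespace wiring performed by nvmf_tcp_init, as traced below (outline).
ip -4 addr flush cvl_0_0                   # start from unconfigured ports
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk               # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # prove reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1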
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.894 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.895 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.895 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.895 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.895 06:55:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:26:13.895 00:26:13.895 --- 10.0.0.2 ping statistics --- 00:26:13.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.895 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:13.895 00:26:13.895 --- 10.0.0.1 ping statistics --- 00:26:13.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.895 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=707319 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 707319 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 707319 ']' 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 [2024-07-15 06:55:01.093348] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:13.895 [2024-07-15 06:55:01.093444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.895 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.895 [2024-07-15 06:55:01.163706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.895 [2024-07-15 06:55:01.254068] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
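The target is started inside that namespace and the harness blocks until its RPC socket answers before issuing any configuration. A condensed sketch of the start-up, assuming the flags from the trace and that waitforlisten polls the default /var/tmp/spdk.sock socket with a cheap RPC round-trip:

# Start nvmf_tgt in the target namespace and wait for its RPC socket (sketch).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten-style poll: ready once any RPC call succeeds
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1           # bail out if the target died
    sleep 0.5
done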
00:26:13.895 [2024-07-15 06:55:01.254132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.895 [2024-07-15 06:55:01.254159] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.895 [2024-07-15 06:55:01.254172] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.895 [2024-07-15 06:55:01.254184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.895 [2024-07-15 06:55:01.254279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.895 [2024-07-15 06:55:01.254334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.895 [2024-07-15 06:55:01.254402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.895 [2024-07-15 06:55:01.254404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 Malloc0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 Delay0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 [2024-07-15 06:55:01.442979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.895 [2024-07-15 06:55:01.471252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.895 06:55:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:14.463 06:55:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:14.463 06:55:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:26:14.463 06:55:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.463 06:55:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:14.463 06:55:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=707628 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:16.991 06:55:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:16.991 [global] 00:26:16.991 thread=1 00:26:16.991 invalidate=1 00:26:16.991 rw=write 00:26:16.991 time_based=1 00:26:16.991 runtime=60 00:26:16.991 
ioengine=libaio 00:26:16.991 direct=1 00:26:16.991 bs=4096 00:26:16.991 iodepth=1 00:26:16.991 norandommap=0 00:26:16.991 numjobs=1 00:26:16.991 00:26:16.991 verify_dump=1 00:26:16.991 verify_backlog=512 00:26:16.991 verify_state_save=0 00:26:16.991 do_verify=1 00:26:16.991 verify=crc32c-intel 00:26:16.991 [job0] 00:26:16.991 filename=/dev/nvme0n1 00:26:16.991 Could not set queue depth (nvme0n1) 00:26:16.991 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:16.991 fio-3.35 00:26:16.991 Starting 1 thread 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.525 true 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.525 true 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.525 true 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.525 true 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.525 06:55:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.809 true 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.809 true 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.809 
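The fio job above writes at queue depth 1 while the test manipulates the Delay0 bdev underneath it: the injected latencies are raised from microseconds to roughly 31 seconds so the initiator's I/O timeout fires, held there briefly, then restored so the job can finish. The RPC pattern, with the values from the trace (the delay bdev takes latencies in microseconds):

# Stall in-flight I/O by inflating the injected latencies (sketch of the trace).
rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3                                     # long enough for the timeout to hit
# ...then drop everything back to 30 us so the remaining I/O drains
for lat in avg_read avg_write p99_read p99_write; do
    rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30
done

The stall shows up as the outsized tail latencies in the fio summary that follows.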
06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.809 true 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.809 true 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:22.809 06:55:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 707628 00:27:19.059 00:27:19.059 job0: (groupid=0, jobs=1): err= 0: pid=707815: Mon Jul 15 06:56:04 2024 00:27:19.059 read: IOPS=74, BW=299KiB/s (306kB/s)(17.5MiB/60027msec) 00:27:19.059 slat (usec): min=4, max=7805, avg=17.81, stdev=116.80 00:27:19.059 clat (usec): min=282, max=42089, avg=3885.32, stdev=11502.65 00:27:19.059 lat (usec): min=288, max=48977, avg=3903.13, stdev=11511.15 00:27:19.059 clat percentiles (usec): 00:27:19.059 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:27:19.059 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 359], 60.00th=[ 379], 00:27:19.059 | 70.00th=[ 416], 80.00th=[ 461], 90.00th=[ 510], 95.00th=[41157], 00:27:19.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:19.059 | 99.99th=[42206] 00:27:19.059 write: IOPS=76, BW=307KiB/s (314kB/s)(18.0MiB/60027msec); 0 zone resets 00:27:19.059 slat (nsec): min=5779, max=70327, avg=13702.31, stdev=8527.48 00:27:19.059 clat (usec): min=189, max=41297k, avg=9204.51, stdev=608362.10 00:27:19.059 lat (usec): min=196, max=41297k, avg=9218.21, stdev=608362.18 00:27:19.059 clat percentiles (usec): 00:27:19.059 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 00:27:19.059 | 20.00th=[ 223], 30.00th=[ 227], 40.00th=[ 229], 00:27:19.059 | 50.00th=[ 233], 60.00th=[ 237], 70.00th=[ 243], 00:27:19.059 | 80.00th=[ 251], 90.00th=[ 281], 95.00th=[ 310], 00:27:19.059 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 441], 00:27:19.059 | 99.95th=[ 445], 99.99th=[17112761] 00:27:19.059 bw ( KiB/s): min= 504, max= 8192, per=100.00%, avg=4608.00, stdev=3112.98, samples=8 00:27:19.059 iops : min= 126, max= 2048, avg=1152.00, stdev=778.24, samples=8 00:27:19.059 lat (usec) : 250=40.07%, 500=54.13%, 750=1.57% 00:27:19.059 lat (msec) : 50=4.21%, >=2000=0.01% 00:27:19.059 cpu : usr=0.13%, sys=0.24%, ctx=9096, majf=0, minf=2 00:27:19.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.059 issued rwts: total=4486,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:19.059 00:27:19.059 Run status group 0 (all jobs): 00:27:19.059 READ: bw=299KiB/s (306kB/s), 299KiB/s-299KiB/s (306kB/s-306kB/s), io=17.5MiB (18.4MB), 
run=60027-60027msec 00:27:19.059 WRITE: bw=307KiB/s (314kB/s), 307KiB/s-307KiB/s (314kB/s-314kB/s), io=18.0MiB (18.9MB), run=60027-60027msec 00:27:19.059 00:27:19.059 Disk stats (read/write): 00:27:19.059 nvme0n1: ios=4581/4608, merge=0/0, ticks=18468/1076, in_queue=19544, util=99.73% 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:19.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:19.059 nvmf hotplug test: fio successful as expected 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:19.059 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.060 rmmod nvme_tcp 00:27:19.060 rmmod nvme_fabrics 00:27:19.060 rmmod nvme_keyring 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 707319 ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@490 -- # killprocess 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 707319 ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 707319' 00:27:19.060 killing process with pid 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 707319 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.060 06:56:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.319 06:56:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.579 00:27:19.579 real 1m8.035s 00:27:19.579 user 4m10.517s 00:27:19.579 sys 0m6.418s 00:27:19.579 06:56:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.579 06:56:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:19.579 ************************************ 00:27:19.579 END TEST nvmf_initiator_timeout 00:27:19.579 ************************************ 00:27:19.579 06:56:06 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:19.579 06:56:06 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:19.579 06:56:06 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:19.579 06:56:06 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.579 06:56:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:21.482 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:21.482 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.482 06:56:08 
nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:21.482 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:21.482 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:21.482 06:56:08 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:21.482 06:56:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:21.482 06:56:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:21.482 06:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.482 ************************************ 00:27:21.482 START TEST nvmf_perf_adq 00:27:21.482 ************************************ 00:27:21.482 06:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:21.482 * Looking for test storage... 
00:27:21.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.482 06:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:23.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:23.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 
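The `[[ ice == unknown ]]` and `[[ ice == unbound ]]` tests above show that, before accepting a port, the harness also resolves which kernel driver is bound to it; both E810 ports here came back as ice. A short sketch of that resolution under the standard sysfs layout (the PCI address is the one from this run):

  pci=0000:0a:00.0
  if [[ -e /sys/bus/pci/devices/$pci/driver ]]; then
      # the driver symlink names the bound module, "ice" for E810 here
      drv=$(basename "$(readlink /sys/bus/pci/devices/$pci/driver)")
  else
      drv=unbound
  fi
  echo "$pci is bound to: $drv"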
00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.382 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:23.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:23.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:23.383 06:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:23.948 06:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:25.858 06:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:31.137 06:56:18 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:31.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:31.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:31.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:31.137 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:31.138 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.138 06:56:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:31.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:27:31.138 00:27:31.138 --- 10.0.0.2 ping statistics --- 00:27:31.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.138 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:31.138 00:27:31.138 --- 10.0.0.1 ping statistics --- 00:27:31.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.138 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=719316 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 719316 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 719316 ']' 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.138 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.138 [2024-07-15 06:56:18.699186] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
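nvmf_tcp_init above wires the two back-to-back E810 ports into a self-contained topology with no external switch: after flushing stale addresses, the target port cvl_0_0 is moved into a fresh network namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1/24, and TCP port 4420 is opened for NVMe/TCP. Condensed from the commands logged above, with interface names and addresses exactly as in this run:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two ping checks above (0.315 ms and 0.134 ms round trips) confirm the path before nvmf_tgt is launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc`.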
00:27:31.138 [2024-07-15 06:56:18.699286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.138 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.396 [2024-07-15 06:56:18.769423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.396 [2024-07-15 06:56:18.856508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.396 [2024-07-15 06:56:18.856559] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.396 [2024-07-15 06:56:18.856582] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.396 [2024-07-15 06:56:18.856594] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.396 [2024-07-15 06:56:18.856604] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.396 [2024-07-15 06:56:18.856667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.396 [2024-07-15 06:56:18.856728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.396 [2024-07-15 06:56:18.856792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.396 [2024-07-15 06:56:18.856794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.396 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:31.396 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:31.396 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.397 06:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.654 [2024-07-15 06:56:19.084846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.654 Malloc1 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.654 [2024-07-15 06:56:19.138073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=719346 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:31.654 06:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:31.654 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:33.561 "tick_rate": 2700000000, 00:27:33.561 
"poll_groups": [ 00:27:33.561 { 00:27:33.561 "name": "nvmf_tgt_poll_group_000", 00:27:33.561 "admin_qpairs": 1, 00:27:33.561 "io_qpairs": 1, 00:27:33.561 "current_admin_qpairs": 1, 00:27:33.561 "current_io_qpairs": 1, 00:27:33.561 "pending_bdev_io": 0, 00:27:33.561 "completed_nvme_io": 20178, 00:27:33.561 "transports": [ 00:27:33.561 { 00:27:33.561 "trtype": "TCP" 00:27:33.561 } 00:27:33.561 ] 00:27:33.561 }, 00:27:33.561 { 00:27:33.561 "name": "nvmf_tgt_poll_group_001", 00:27:33.561 "admin_qpairs": 0, 00:27:33.561 "io_qpairs": 1, 00:27:33.561 "current_admin_qpairs": 0, 00:27:33.561 "current_io_qpairs": 1, 00:27:33.561 "pending_bdev_io": 0, 00:27:33.561 "completed_nvme_io": 20481, 00:27:33.561 "transports": [ 00:27:33.561 { 00:27:33.561 "trtype": "TCP" 00:27:33.561 } 00:27:33.561 ] 00:27:33.561 }, 00:27:33.561 { 00:27:33.561 "name": "nvmf_tgt_poll_group_002", 00:27:33.561 "admin_qpairs": 0, 00:27:33.561 "io_qpairs": 1, 00:27:33.561 "current_admin_qpairs": 0, 00:27:33.561 "current_io_qpairs": 1, 00:27:33.561 "pending_bdev_io": 0, 00:27:33.561 "completed_nvme_io": 20353, 00:27:33.561 "transports": [ 00:27:33.561 { 00:27:33.561 "trtype": "TCP" 00:27:33.561 } 00:27:33.561 ] 00:27:33.561 }, 00:27:33.561 { 00:27:33.561 "name": "nvmf_tgt_poll_group_003", 00:27:33.561 "admin_qpairs": 0, 00:27:33.561 "io_qpairs": 1, 00:27:33.561 "current_admin_qpairs": 0, 00:27:33.561 "current_io_qpairs": 1, 00:27:33.561 "pending_bdev_io": 0, 00:27:33.561 "completed_nvme_io": 20417, 00:27:33.561 "transports": [ 00:27:33.561 { 00:27:33.561 "trtype": "TCP" 00:27:33.561 } 00:27:33.561 ] 00:27:33.561 } 00:27:33.561 ] 00:27:33.561 }' 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:33.561 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:33.819 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:33.819 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:33.819 06:56:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 719346 00:27:41.962 Initializing NVMe Controllers 00:27:41.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:41.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:41.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:41.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:41.962 Initialization complete. Launching workers. 
00:27:41.962 ======================================================== 00:27:41.962 Latency(us) 00:27:41.962 Device Information : IOPS MiB/s Average min max 00:27:41.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10483.13 40.95 6107.05 4988.55 7788.24 00:27:41.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10547.13 41.20 6069.74 4966.61 7734.41 00:27:41.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10497.63 41.01 6097.12 3972.63 7837.27 00:27:41.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10286.43 40.18 6222.06 4154.15 7827.18 00:27:41.962 ======================================================== 00:27:41.962 Total : 41814.33 163.34 6123.44 3972.63 7837.27 00:27:41.962 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.962 rmmod nvme_tcp 00:27:41.962 rmmod nvme_fabrics 00:27:41.962 rmmod nvme_keyring 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 719316 ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 719316 ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 719316' 00:27:41.962 killing process with pid 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 719316 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.962 06:56:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.503 06:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.503 06:56:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:44.503 06:56:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:44.762 06:56:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:46.665 06:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.942 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.943 06:56:39 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.943 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.943 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.943 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.943 
06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:51.943 00:27:51.943 --- 10.0.0.2 ping statistics --- 00:27:51.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.943 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:51.943 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:27:51.944 00:27:51.944 --- 10.0.0.1 ping statistics --- 00:27:51.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.944 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:51.944 net.core.busy_poll = 1 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:51.944 net.core.busy_read = 1 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=721955 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 721955 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 721955 ']' 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.944 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.944 [2024-07-15 06:56:39.521757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:51.944 [2024-07-15 06:56:39.521841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.204 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.204 [2024-07-15 06:56:39.595950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.204 [2024-07-15 06:56:39.685589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.204 [2024-07-15 06:56:39.685664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.204 [2024-07-15 06:56:39.685677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.204 [2024-07-15 06:56:39.685688] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.204 [2024-07-15 06:56:39.685697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
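adq_configure_driver above is what separates this second pass from the first run: hardware TC offload is switched on for the target port, busy polling is enabled via sysctl, an mqprio qdisc carves the queues into two traffic classes, and a hardware flower filter steers the NVMe/TCP flow (dst 10.0.0.2, TCP port 4420) into TC 1. Condensed from the commands logged above; the ethtool and tc commands run inside the target namespace, the busy-poll sysctls in the root namespace, exactly as logged:

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2, offloaded to the NIC
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

As the RPCs that follow show, the SPDK side flips the matching knobs relative to the first run: sock_impl_set_options --enable-placement-id moves from 0 to 1 and nvmf_create_transport's --sock-priority from 0 to 1, so the target's sockets line up with the ADQ traffic class.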
00:27:52.204 [2024-07-15 06:56:39.685783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.204 [2024-07-15 06:56:39.685847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.204 [2024-07-15 06:56:39.685914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.204 [2024-07-15 06:56:39.685918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.204 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 [2024-07-15 06:56:39.922579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 Malloc1 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.464 [2024-07-15 06:56:39.973990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=722067 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:52.464 06:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:52.464 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.368 06:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:54.625 "tick_rate": 2700000000, 00:27:54.625 "poll_groups": [ 00:27:54.625 { 00:27:54.625 "name": "nvmf_tgt_poll_group_000", 00:27:54.625 "admin_qpairs": 1, 00:27:54.625 "io_qpairs": 1, 00:27:54.625 "current_admin_qpairs": 1, 00:27:54.625 "current_io_qpairs": 1, 00:27:54.625 "pending_bdev_io": 0, 00:27:54.625 "completed_nvme_io": 25689, 00:27:54.625 "transports": [ 00:27:54.625 { 00:27:54.625 "trtype": "TCP" 00:27:54.625 } 00:27:54.625 ] 00:27:54.625 }, 00:27:54.625 { 00:27:54.625 "name": "nvmf_tgt_poll_group_001", 00:27:54.625 "admin_qpairs": 0, 00:27:54.625 "io_qpairs": 3, 00:27:54.625 "current_admin_qpairs": 0, 00:27:54.625 "current_io_qpairs": 3, 00:27:54.625 "pending_bdev_io": 0, 00:27:54.625 "completed_nvme_io": 25524, 00:27:54.625 "transports": [ 00:27:54.625 { 00:27:54.625 "trtype": "TCP" 00:27:54.625 } 00:27:54.625 ] 00:27:54.625 }, 00:27:54.625 { 00:27:54.625 "name": "nvmf_tgt_poll_group_002", 00:27:54.625 "admin_qpairs": 0, 00:27:54.625 "io_qpairs": 0, 00:27:54.625 "current_admin_qpairs": 0, 00:27:54.625 "current_io_qpairs": 0, 00:27:54.625 "pending_bdev_io": 0, 00:27:54.625 "completed_nvme_io": 0, 
00:27:54.625 "transports": [ 00:27:54.625 { 00:27:54.625 "trtype": "TCP" 00:27:54.625 } 00:27:54.625 ] 00:27:54.625 }, 00:27:54.625 { 00:27:54.625 "name": "nvmf_tgt_poll_group_003", 00:27:54.625 "admin_qpairs": 0, 00:27:54.625 "io_qpairs": 0, 00:27:54.625 "current_admin_qpairs": 0, 00:27:54.625 "current_io_qpairs": 0, 00:27:54.625 "pending_bdev_io": 0, 00:27:54.625 "completed_nvme_io": 0, 00:27:54.625 "transports": [ 00:27:54.625 { 00:27:54.625 "trtype": "TCP" 00:27:54.625 } 00:27:54.625 ] 00:27:54.625 } 00:27:54.625 ] 00:27:54.625 }' 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:54.625 06:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:54.625 06:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:54.625 06:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:54.625 06:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 722067 00:28:02.741 Initializing NVMe Controllers 00:28:02.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:02.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:02.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:02.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:02.741 Initialization complete. Launching workers. 00:28:02.741 ======================================================== 00:28:02.741 Latency(us) 00:28:02.741 Device Information : IOPS MiB/s Average min max 00:28:02.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4352.90 17.00 14706.60 2551.88 61676.69 00:28:02.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4493.40 17.55 14292.70 1778.09 61928.77 00:28:02.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13560.10 52.97 4720.43 1744.58 6758.52 00:28:02.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4753.30 18.57 13467.79 1810.70 60297.89 00:28:02.741 ======================================================== 00:28:02.741 Total : 27159.69 106.09 9435.49 1744.58 61928.77 00:28:02.741 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.741 rmmod nvme_tcp 00:28:02.741 rmmod nvme_fabrics 00:28:02.741 rmmod nvme_keyring 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 721955 ']' 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 721955 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 721955 ']' 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 721955 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 721955 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 721955' 00:28:02.741 killing process with pid 721955 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 721955 00:28:02.741 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 721955 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.001 06:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.335 06:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.335 06:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:06.335 00:28:06.335 real 0m44.557s 00:28:06.335 user 2m38.371s 00:28:06.335 sys 0m9.746s 00:28:06.335 06:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.335 06:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.335 ************************************ 00:28:06.335 END TEST nvmf_perf_adq 00:28:06.335 ************************************ 00:28:06.335 06:56:53 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:06.335 06:56:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:06.335 06:56:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.335 06:56:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.335 ************************************ 00:28:06.335 START TEST nvmf_shutdown 00:28:06.335 ************************************ 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:06.335 * Looking for test storage... 
00:28:06.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.335 ************************************ 00:28:06.335 START TEST nvmf_shutdown_tc1 00:28:06.335 ************************************ 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:28:06.335 06:56:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.335 06:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.238 06:56:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.238 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:28:08.239 00:28:08.239 --- 10.0.0.2 ping statistics --- 00:28:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.239 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:28:08.239 00:28:08.239 --- 10.0.0.1 ping statistics --- 00:28:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.239 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=725278 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 725278 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 725278 ']' 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:08.239 06:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.239 [2024-07-15 06:56:55.825145] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:08.239 [2024-07-15 06:56:55.825226] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.496 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.496 [2024-07-15 06:56:55.893526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.496 [2024-07-15 06:56:55.983492] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.496 [2024-07-15 06:56:55.983550] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.496 [2024-07-15 06:56:55.983579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.496 [2024-07-15 06:56:55.983591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.496 [2024-07-15 06:56:55.983601] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.496 [2024-07-15 06:56:55.983690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.496 [2024-07-15 06:56:55.983744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.496 [2024-07-15 06:56:55.983793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.496 [2024-07-15 06:56:55.983795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.496 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:08.496 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:08.496 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.496 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.496 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.755 [2024-07-15 06:56:56.133750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.755 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.755 Malloc1 00:28:08.755 [2024-07-15 06:56:56.217467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.755 Malloc2 00:28:08.755 Malloc3 00:28:08.755 Malloc4 00:28:09.015 Malloc5 00:28:09.015 Malloc6 00:28:09.015 Malloc7 00:28:09.015 Malloc8 00:28:09.015 Malloc9 00:28:09.274 Malloc10 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=725452 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 725452 /var/tmp/bdevperf.sock 00:28:09.274 06:56:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 725452 ']' 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:09.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.274 { 00:28:09.274 "params": { 00:28:09.274 "name": "Nvme$subsystem", 00:28:09.274 "trtype": "$TEST_TRANSPORT", 00:28:09.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.274 "adrfam": "ipv4", 00:28:09.274 "trsvcid": "$NVMF_PORT", 00:28:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.274 "hdgst": ${hdgst:-false}, 00:28:09.274 "ddgst": ${ddgst:-false} 00:28:09.274 }, 00:28:09.274 "method": "bdev_nvme_attach_controller" 00:28:09.274 } 00:28:09.274 EOF 00:28:09.274 )") 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.274 { 00:28:09.274 "params": { 00:28:09.274 "name": "Nvme$subsystem", 00:28:09.274 "trtype": "$TEST_TRANSPORT", 00:28:09.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.274 "adrfam": "ipv4", 00:28:09.274 "trsvcid": "$NVMF_PORT", 00:28:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.274 "hdgst": ${hdgst:-false}, 00:28:09.274 "ddgst": ${ddgst:-false} 00:28:09.274 }, 00:28:09.274 "method": "bdev_nvme_attach_controller" 00:28:09.274 } 00:28:09.274 EOF 00:28:09.274 )") 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.274 { 00:28:09.274 "params": { 00:28:09.274 "name": "Nvme$subsystem", 00:28:09.274 "trtype": 
"$TEST_TRANSPORT", 00:28:09.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.274 "adrfam": "ipv4", 00:28:09.274 "trsvcid": "$NVMF_PORT", 00:28:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.274 "hdgst": ${hdgst:-false}, 00:28:09.274 "ddgst": ${ddgst:-false} 00:28:09.274 }, 00:28:09.274 "method": "bdev_nvme_attach_controller" 00:28:09.274 } 00:28:09.274 EOF 00:28:09.274 )") 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.274 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.274 { 00:28:09.274 "params": { 00:28:09.274 "name": "Nvme$subsystem", 00:28:09.274 "trtype": "$TEST_TRANSPORT", 00:28:09.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.274 "adrfam": "ipv4", 00:28:09.274 "trsvcid": "$NVMF_PORT", 00:28:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.274 "hdgst": ${hdgst:-false}, 00:28:09.274 "ddgst": ${ddgst:-false} 00:28:09.274 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 
00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.275 { 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme$subsystem", 00:28:09.275 "trtype": "$TEST_TRANSPORT", 00:28:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "$NVMF_PORT", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.275 "hdgst": ${hdgst:-false}, 00:28:09.275 "ddgst": ${ddgst:-false} 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 } 00:28:09.275 EOF 00:28:09.275 )") 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:09.275 06:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme1", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme2", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme3", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme4", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme5", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme6", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme7", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme8", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:09.275 "hdgst": false, 
00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme9", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:09.275 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:09.275 "hdgst": false, 00:28:09.275 "ddgst": false 00:28:09.275 }, 00:28:09.275 "method": "bdev_nvme_attach_controller" 00:28:09.275 },{ 00:28:09.275 "params": { 00:28:09.275 "name": "Nvme10", 00:28:09.275 "trtype": "tcp", 00:28:09.275 "traddr": "10.0.0.2", 00:28:09.275 "adrfam": "ipv4", 00:28:09.275 "trsvcid": "4420", 00:28:09.276 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:09.276 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:09.276 "hdgst": false, 00:28:09.276 "ddgst": false 00:28:09.276 }, 00:28:09.276 "method": "bdev_nvme_attach_controller" 00:28:09.276 }' 00:28:09.276 [2024-07-15 06:56:56.740690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:09.276 [2024-07-15 06:56:56.740761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:09.276 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.276 [2024-07-15 06:56:56.804723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.534 [2024-07-15 06:56:56.891485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 725452 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:11.437 06:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:12.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 725452 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 725278 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.370 "adrfam": "ipv4", 00:28:12.370 "trsvcid": "$NVMF_PORT", 00:28:12.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.370 "hdgst": ${hdgst:-false}, 00:28:12.370 "ddgst": ${ddgst:-false} 00:28:12.370 }, 00:28:12.370 "method": "bdev_nvme_attach_controller" 00:28:12.370 } 00:28:12.370 EOF 00:28:12.370 )") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.370 "adrfam": "ipv4", 00:28:12.370 "trsvcid": "$NVMF_PORT", 00:28:12.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.370 "hdgst": ${hdgst:-false}, 00:28:12.370 "ddgst": ${ddgst:-false} 00:28:12.370 }, 00:28:12.370 "method": "bdev_nvme_attach_controller" 00:28:12.370 } 00:28:12.370 EOF 00:28:12.370 )") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.370 "adrfam": "ipv4", 00:28:12.370 "trsvcid": "$NVMF_PORT", 00:28:12.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.370 "hdgst": ${hdgst:-false}, 00:28:12.370 "ddgst": ${ddgst:-false} 00:28:12.370 }, 00:28:12.370 "method": "bdev_nvme_attach_controller" 00:28:12.370 } 00:28:12.370 EOF 00:28:12.370 )") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.370 "adrfam": "ipv4", 00:28:12.370 "trsvcid": "$NVMF_PORT", 00:28:12.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.370 "hdgst": ${hdgst:-false}, 00:28:12.370 "ddgst": ${ddgst:-false} 00:28:12.370 }, 00:28:12.370 "method": "bdev_nvme_attach_controller" 00:28:12.370 } 00:28:12.370 EOF 00:28:12.370 )") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.370 "adrfam": "ipv4", 00:28:12.370 "trsvcid": "$NVMF_PORT", 00:28:12.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.370 "hdgst": ${hdgst:-false}, 00:28:12.370 "ddgst": ${ddgst:-false} 00:28:12.370 }, 00:28:12.370 "method": "bdev_nvme_attach_controller" 00:28:12.370 } 00:28:12.370 EOF 00:28:12.370 )") 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.370 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.370 { 00:28:12.370 "params": { 00:28:12.370 "name": "Nvme$subsystem", 00:28:12.370 "trtype": "$TEST_TRANSPORT", 00:28:12.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "$NVMF_PORT", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.371 "hdgst": ${hdgst:-false}, 00:28:12.371 "ddgst": ${ddgst:-false} 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 } 00:28:12.371 EOF 00:28:12.371 )") 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.371 { 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme$subsystem", 00:28:12.371 "trtype": "$TEST_TRANSPORT", 00:28:12.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "$NVMF_PORT", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.371 "hdgst": ${hdgst:-false}, 00:28:12.371 "ddgst": ${ddgst:-false} 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 } 00:28:12.371 EOF 00:28:12.371 )") 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.371 { 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme$subsystem", 00:28:12.371 "trtype": "$TEST_TRANSPORT", 00:28:12.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "$NVMF_PORT", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.371 "hdgst": ${hdgst:-false}, 00:28:12.371 "ddgst": ${ddgst:-false} 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 } 00:28:12.371 EOF 00:28:12.371 )") 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.371 06:56:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.371 { 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme$subsystem", 00:28:12.371 "trtype": "$TEST_TRANSPORT", 00:28:12.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "$NVMF_PORT", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.371 "hdgst": ${hdgst:-false}, 00:28:12.371 "ddgst": ${ddgst:-false} 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 } 00:28:12.371 EOF 00:28:12.371 )") 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.371 { 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme$subsystem", 00:28:12.371 "trtype": "$TEST_TRANSPORT", 00:28:12.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "$NVMF_PORT", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.371 "hdgst": ${hdgst:-false}, 00:28:12.371 "ddgst": ${ddgst:-false} 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 } 00:28:12.371 EOF 00:28:12.371 )") 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
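Worth noting how this generated JSON reaches its consumers: shutdown.sh passes --json <(gen_nvmf_target_json ...), which is why the kill message above shows bdev_svc reading /dev/fd/62 while the bdevperf invocation reads /dev/fd/63. Process substitution exposes a command's output as a file-descriptor path, so a tool that insists on a JSON file path needs no temp file. A tiny illustration, with jq standing in for the SPDK binaries:

consume_json() {
    # stand-in for "bdevperf --json <path>": read and validate the file
    jq . "$1"
}

consume_json <(printf '{"params": {"name": "Nvme1"}}')
# consume_json receives a path such as /dev/fd/63, not a regular file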
00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:12.371 06:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme1", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme2", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme3", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme4", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme5", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme6", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme7", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme8", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.371 "hdgst": false, 
00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme9", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 },{ 00:28:12.371 "params": { 00:28:12.371 "name": "Nvme10", 00:28:12.371 "trtype": "tcp", 00:28:12.371 "traddr": "10.0.0.2", 00:28:12.371 "adrfam": "ipv4", 00:28:12.371 "trsvcid": "4420", 00:28:12.371 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.371 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.371 "hdgst": false, 00:28:12.371 "ddgst": false 00:28:12.371 }, 00:28:12.371 "method": "bdev_nvme_attach_controller" 00:28:12.371 }' 00:28:12.371 [2024-07-15 06:56:59.747980] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:12.371 [2024-07-15 06:56:59.748060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725873 ] 00:28:12.371 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.371 [2024-07-15 06:56:59.813100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.371 [2024-07-15 06:56:59.900052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.744 Running I/O for 1 seconds... 00:28:15.117 00:28:15.117 Latency(us) 00:28:15.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme1n1 : 1.19 214.93 13.43 0.00 0.00 292643.46 20388.98 264085.81 00:28:15.117 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme2n1 : 1.18 219.52 13.72 0.00 0.00 283434.89 5000.15 259425.47 00:28:15.117 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme3n1 : 1.17 230.82 14.43 0.00 0.00 254263.80 26796.94 259425.47 00:28:15.117 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme4n1 : 1.16 219.98 13.75 0.00 0.00 274340.98 17185.00 274959.93 00:28:15.117 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme5n1 : 1.20 213.51 13.34 0.00 0.00 278672.50 24855.13 256318.58 00:28:15.117 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme6n1 : 1.21 265.34 16.58 0.00 0.00 220699.15 18252.99 248551.35 00:28:15.117 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme7n1 : 1.20 213.10 13.32 0.00 0.00 270069.00 20194.80 274959.93 00:28:15.117 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 
00:28:15.117 Nvme8n1 : 1.21 264.64 16.54 0.00 0.00 214189.85 15534.46 254765.13 00:28:15.117 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme9n1 : 1.21 263.60 16.47 0.00 0.00 210009.35 13107.20 254765.13 00:28:15.117 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.117 Verification LBA range: start 0x0 length 0x400 00:28:15.117 Nvme10n1 : 1.19 214.34 13.40 0.00 0.00 255212.28 21068.61 279620.27 00:28:15.117 =================================================================================================================== 00:28:15.117 Total : 2319.78 144.99 0.00 0.00 252588.55 5000.15 279620.27 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.117 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.117 rmmod nvme_tcp 00:28:15.117 rmmod nvme_fabrics 00:28:15.117 rmmod nvme_keyring 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 725278 ']' 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 725278 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 725278 ']' 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 725278 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 725278 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:15.377 06:57:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 725278' 00:28:15.377 killing process with pid 725278 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 725278 00:28:15.377 06:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 725278 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.946 06:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.847 00:28:17.847 real 0m11.657s 00:28:17.847 user 0m33.640s 00:28:17.847 sys 0m3.138s 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.847 ************************************ 00:28:17.847 END TEST nvmf_shutdown_tc1 00:28:17.847 ************************************ 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.847 ************************************ 00:28:17.847 START TEST nvmf_shutdown_tc2 00:28:17.847 ************************************ 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.847 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.848 06:57:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.848 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:18.106 00:28:18.106 --- 10.0.0.2 ping statistics --- 00:28:18.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.106 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:18.106 00:28:18.106 --- 10.0.0.1 ping statistics --- 00:28:18.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.106 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=726747 
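The nvmf_tcp_init sequence above gives the target port its own network namespace so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2) can exchange real TCP traffic on a single host, then verifies reachability in both directions with single pings. Condensed to just the commands (device names and addresses as in the trace; run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                            # namespace for the target side
ip link set cvl_0_0 netns "$NS"               # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                            # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1        # and back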
00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 726747 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 726747 ']' 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.106 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:18.107 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.107 [2024-07-15 06:57:05.588039] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:18.107 [2024-07-15 06:57:05.588111] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.107 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.107 [2024-07-15 06:57:05.658952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.364 [2024-07-15 06:57:05.752184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.365 [2024-07-15 06:57:05.752243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.365 [2024-07-15 06:57:05.752258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.365 [2024-07-15 06:57:05.752272] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.365 [2024-07-15 06:57:05.752284] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
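The waitforlisten 726747 step here blocks until the just-launched nvmf_tgt is actually serving RPCs on /var/tmp/spdk.sock rather than sleeping a fixed time. A plausible minimal version of such a helper; the real one in autotest_common.sh differs in details, though rpc.py and rpc_get_methods are standard SPDK tooling:

waitforlisten_sketch() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # app died before listening
        # succeeds once the app answers on its RPC socket
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                          # timed out
}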
00:28:18.365 [2024-07-15 06:57:05.752380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.365 [2024-07-15 06:57:05.752480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.365 [2024-07-15 06:57:05.752532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.365 [2024-07-15 06:57:05.752529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.365 [2024-07-15 06:57:05.888456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.365 06:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.365 Malloc1 00:28:18.365 [2024-07-15 06:57:05.959293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.622 Malloc2 00:28:18.622 Malloc3 00:28:18.622 Malloc4 00:28:18.623 Malloc5 00:28:18.623 Malloc6 00:28:18.623 Malloc7 00:28:18.881 Malloc8 00:28:18.881 Malloc9 00:28:18.881 Malloc10 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=726922 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 726922 /var/tmp/bdevperf.sock 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 726922 ']' 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
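The create_subsystems phase above builds rpcs.txt with one block of RPC lines per subsystem (the ten '# cat' passes) and replays the whole file against the target in a single rpc_cmd batch, which is what produces Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. A hedged sketch of what each block plausibly contains -- the RPC names are standard SPDK, but the exact bdev sizes and flags here are assumptions:

: > rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt   # one RPC session for the batch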
00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 
00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.881 { 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme$subsystem", 00:28:18.881 "trtype": "$TEST_TRANSPORT", 00:28:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "$NVMF_PORT", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.881 "hdgst": ${hdgst:-false}, 00:28:18.881 "ddgst": ${ddgst:-false} 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 } 00:28:18.881 EOF 00:28:18.881 )") 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:18.881 06:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme1", 00:28:18.881 "trtype": "tcp", 00:28:18.881 "traddr": "10.0.0.2", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "4420", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.881 "hdgst": false, 00:28:18.881 "ddgst": false 00:28:18.881 }, 00:28:18.881 "method": "bdev_nvme_attach_controller" 00:28:18.881 },{ 00:28:18.881 "params": { 00:28:18.881 "name": "Nvme2", 00:28:18.881 "trtype": "tcp", 00:28:18.881 "traddr": "10.0.0.2", 00:28:18.881 "adrfam": "ipv4", 00:28:18.881 "trsvcid": "4420", 00:28:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.881 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme3", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme4", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme5", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme6", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme7", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme8", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:18.882 "hdgst": false, 
00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme9", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 },{ 00:28:18.882 "params": { 00:28:18.882 "name": "Nvme10", 00:28:18.882 "trtype": "tcp", 00:28:18.882 "traddr": "10.0.0.2", 00:28:18.882 "adrfam": "ipv4", 00:28:18.882 "trsvcid": "4420", 00:28:18.882 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.882 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.882 "hdgst": false, 00:28:18.882 "ddgst": false 00:28:18.882 }, 00:28:18.882 "method": "bdev_nvme_attach_controller" 00:28:18.882 }' 00:28:18.882 [2024-07-15 06:57:06.468996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:18.882 [2024-07-15 06:57:06.469074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726922 ] 00:28:19.140 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.140 [2024-07-15 06:57:06.533916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.140 [2024-07-15 06:57:06.618521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.511 Running I/O for 10 seconds... 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable
00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67
00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:28:21.077 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 726922
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 726922 ']'
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 726922
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 726922
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 726922'
00:28:21.378 killing process with pid 726922
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 726922
00:28:21.378 06:57:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 726922
00:28:21.378 Received shutdown signal, test time was about 0.862336 seconds
00:28:21.378
00:28:21.378 Latency(us)
00:28:21.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:21.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme1n1 : 0.84 227.98 14.25 0.00 0.00 277041.30 33204.91 240784.12
00:28:21.378 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme2n1 : 0.83 230.51 14.41 0.00 0.00 266565.66 28156.21 242337.56
00:28:21.378 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme3n1 : 0.82 234.99 14.69 0.00 0.00 256266.49 31651.46 246997.90
00:28:21.378 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme4n1 : 0.82 241.43 15.09 0.00 0.00 240792.53 6310.87 259425.47
00:28:21.378 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme5n1 : 0.86 223.09 13.94 0.00 0.00 258304.13 22816.24 276513.37
00:28:21.378 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme6n1 : 0.85 224.91 14.06 0.00 0.00 250015.79 18738.44 257872.02
00:28:21.378 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme7n1 : 0.84 234.23 14.64 0.00 0.00 232526.61 4271.98 243891.01
00:28:21.378 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme8n1 : 0.85 225.57 14.10 0.00 0.00 237320.72 19709.35 268746.15
00:28:21.378 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme9n1 : 0.85 226.43 14.15 0.00 0.00 230289.26 22136.60 260978.92
00:28:21.378 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.378 Verification LBA range: start 0x0 length 0x400
00:28:21.378 Nvme10n1 : 0.86 215.89 13.49 0.00 0.00 234521.27 20680.25 296708.17
00:28:21.378 ===================================================================================================================
00:28:21.378 Total : 2285.04 142.81 0.00 0.00 248347.08 4271.98 296708.17
00:28:21.635 06:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 726747
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:28:22.568
06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.568 rmmod nvme_tcp 00:28:22.568 rmmod nvme_fabrics 00:28:22.568 rmmod nvme_keyring 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 726747 ']' 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 726747 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 726747 ']' 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 726747 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:22.568 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 726747 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 726747' 00:28:22.848 killing process with pid 726747 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 726747 00:28:22.848 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 726747 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.106 06:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.645 00:28:25.645 real 0m7.375s 00:28:25.645 user 0m21.928s 00:28:25.645 sys 0m1.385s 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 ************************************ 00:28:25.645 END TEST nvmf_shutdown_tc2 00:28:25.645 ************************************ 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 ************************************ 00:28:25.645 START TEST nvmf_shutdown_tc3 00:28:25.645 ************************************ 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.645 06:57:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.645 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.646 
06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:25.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:25.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms
00:28:25.646
00:28:25.646 --- 10.0.0.2 ping statistics ---
00:28:25.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.646 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:25.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:25.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:28:25.646
00:28:25.646 --- 10.0.0.1 ping statistics ---
00:28:25.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.646 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=728342
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 728342
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 728342 ']'
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:25.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:25.646 06:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:25.646 [2024-07-15 06:57:13.050067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:25.646 [2024-07-15 06:57:13.050172] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.646 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.646 [2024-07-15 06:57:13.114151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.646 [2024-07-15 06:57:13.199251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.647 [2024-07-15 06:57:13.199302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.647 [2024-07-15 06:57:13.199314] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.647 [2024-07-15 06:57:13.199325] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.647 [2024-07-15 06:57:13.199334] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.647 [2024-07-15 06:57:13.199417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.647 [2024-07-15 06:57:13.199480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.647 [2024-07-15 06:57:13.199549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.647 [2024-07-15 06:57:13.199551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.904 [2024-07-15 06:57:13.346448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.904 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.904 Malloc1 00:28:25.904 [2024-07-15 06:57:13.421365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.904 Malloc2 00:28:25.904 Malloc3 00:28:26.160 Malloc4 00:28:26.160 Malloc5 00:28:26.161 Malloc6 00:28:26.161 Malloc7 00:28:26.161 Malloc8 00:28:26.419 Malloc9 00:28:26.419 Malloc10 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=728402 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 728402 /var/tmp/bdevperf.sock 00:28:26.419 06:57:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 728402 ']' 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:26.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.419 { 00:28:26.419 "params": { 00:28:26.419 "name": "Nvme$subsystem", 00:28:26.419 "trtype": "$TEST_TRANSPORT", 00:28:26.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.419 "adrfam": "ipv4", 00:28:26.419 "trsvcid": "$NVMF_PORT", 00:28:26.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.419 "hdgst": ${hdgst:-false}, 00:28:26.419 "ddgst": ${ddgst:-false} 00:28:26.419 }, 00:28:26.419 "method": "bdev_nvme_attach_controller" 00:28:26.419 } 00:28:26.419 EOF 00:28:26.419 )") 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.419 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.419 { 00:28:26.419 "params": { 00:28:26.419 "name": "Nvme$subsystem", 00:28:26.419 "trtype": "$TEST_TRANSPORT", 00:28:26.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.419 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 
00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": 
"$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.420 { 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme$subsystem", 00:28:26.420 "trtype": "$TEST_TRANSPORT", 00:28:26.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "$NVMF_PORT", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.420 "hdgst": ${hdgst:-false}, 00:28:26.420 "ddgst": ${ddgst:-false} 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 } 00:28:26.420 EOF 00:28:26.420 )") 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:26.420 06:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme1", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.420 "hdgst": false, 00:28:26.420 "ddgst": false 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 },{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme2", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:26.420 "hdgst": false, 00:28:26.420 "ddgst": false 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 },{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme3", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:26.420 "hdgst": false, 00:28:26.420 "ddgst": false 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 },{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme4", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:26.420 "hdgst": false, 00:28:26.420 "ddgst": false 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 },{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme5", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:26.420 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:26.420 "hdgst": false, 00:28:26.420 "ddgst": false 00:28:26.420 }, 00:28:26.420 "method": "bdev_nvme_attach_controller" 00:28:26.420 },{ 00:28:26.420 "params": { 00:28:26.420 "name": "Nvme6", 00:28:26.420 "trtype": "tcp", 00:28:26.420 "traddr": "10.0.0.2", 00:28:26.420 "adrfam": "ipv4", 00:28:26.420 "trsvcid": "4420", 00:28:26.420 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:26.421 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:26.421 "hdgst": false, 00:28:26.421 "ddgst": false 00:28:26.421 }, 00:28:26.421 "method": "bdev_nvme_attach_controller" 00:28:26.421 },{ 00:28:26.421 "params": { 00:28:26.421 "name": "Nvme7", 00:28:26.421 "trtype": "tcp", 00:28:26.421 "traddr": "10.0.0.2", 00:28:26.421 "adrfam": "ipv4", 00:28:26.421 "trsvcid": "4420", 00:28:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:26.421 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:26.421 "hdgst": false, 00:28:26.421 "ddgst": false 00:28:26.421 }, 00:28:26.421 "method": "bdev_nvme_attach_controller" 00:28:26.421 },{ 00:28:26.421 "params": { 00:28:26.421 "name": "Nvme8", 00:28:26.421 "trtype": "tcp", 00:28:26.421 "traddr": "10.0.0.2", 00:28:26.421 "adrfam": "ipv4", 00:28:26.421 "trsvcid": "4420", 00:28:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:26.421 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:26.421 "hdgst": false, 
00:28:26.421 "ddgst": false 00:28:26.421 }, 00:28:26.421 "method": "bdev_nvme_attach_controller" 00:28:26.421 },{ 00:28:26.421 "params": { 00:28:26.421 "name": "Nvme9", 00:28:26.421 "trtype": "tcp", 00:28:26.421 "traddr": "10.0.0.2", 00:28:26.421 "adrfam": "ipv4", 00:28:26.421 "trsvcid": "4420", 00:28:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:26.421 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:26.421 "hdgst": false, 00:28:26.421 "ddgst": false 00:28:26.421 }, 00:28:26.421 "method": "bdev_nvme_attach_controller" 00:28:26.421 },{ 00:28:26.421 "params": { 00:28:26.421 "name": "Nvme10", 00:28:26.421 "trtype": "tcp", 00:28:26.421 "traddr": "10.0.0.2", 00:28:26.421 "adrfam": "ipv4", 00:28:26.421 "trsvcid": "4420", 00:28:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:26.421 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:26.421 "hdgst": false, 00:28:26.421 "ddgst": false 00:28:26.421 }, 00:28:26.421 "method": "bdev_nvme_attach_controller" 00:28:26.421 }' 00:28:26.421 [2024-07-15 06:57:13.910564] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:26.421 [2024-07-15 06:57:13.910656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728402 ] 00:28:26.421 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.421 [2024-07-15 06:57:13.975370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.680 [2024-07-15 06:57:14.063966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.581 Running I/O for 10 seconds... 00:28:28.581 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:28.581 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:28.581 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:28.581 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.581 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:28.841 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:29.102 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 728342 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 728342 ']' 00:28:29.376
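At this point waitforio has seen Nvme1n1's read counter cross its threshold (read_io_count climbed 3, 67, 131 against the '-ge 100' test), so the helper returns success and the harness moves on to the killprocess teardown traced below. A hedged sketch of both helpers, reconstructed from the xtrace rather than quoted from target/shutdown.sh or common/autotest_common.sh; the scripts/rpc.py invocation stands in for the rpc_cmd wrapper, and the sudo special-casing hinted at by the "'[' reactor_1 = sudo ']'" check is omitted:

    # Sketch: poll a bdev's read counter over the bdevperf RPC socket until it
    # reaches 100 ops or the probe budget runs out (10 probes, 0.25 s apart).
    waitforio() {
      local sock=$1 bdev=$2 ret=1 i count
      [ -z "$sock" ] && return 1
      [ -z "$bdev" ] && return 1
      for ((i = 10; i != 0; i--)); do
        # rpc_cmd in the trace wraps SPDK's scripts/rpc.py; the path is an assumption here.
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
          ret=0
          break
        fi
        sleep 0.25
      done
      return $ret
    }

    # Sketch: stop a process by pid the way the trace below does -- bail on an
    # empty pid, confirm it is alive, log its name, then kill and reap it.
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0           # already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 in this run
      echo "killing process with pid $pid ($process_name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                  # reap only if it is our child
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess 728342

Checking the process name via ps -o comm= before issuing the kill guards against signalling a recycled pid, which is why the trace records process_name=reactor_1 ahead of the kill itself.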
06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 728342 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 728342 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 728342' 00:28:29.376 killing process with pid 728342 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 728342 00:28:29.376 06:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 728342 00:28:29.376 [2024-07-15 06:57:16.863045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de560 is same with the state(5) to be set 00:28:29.376 [...] [2024-07-15 06:57:16.863928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de560 is same with the state(5) to be set 00:28:29.377 [2024-07-15 06:57:16.866349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dea00 is same with the state(5) to be set 00:28:29.377 [...] [2024-07-15 06:57:16.867176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dea00 is same with the state(5) to be set 00:28:29.378 [2024-07-15 06:57:16.868755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24deea0 is same with the state(5) to be set 00:28:29.378 [...] [2024-07-15 06:57:16.869020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24deea0 is same with the state(5) to be set 00:28:29.378 [2024-07-15 06:57:16.869031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24deea0 is same with the state(5) to be set 00:28:29.378 [2024-07-15 06:57:16.870267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.378 [2024-07-15 06:57:16.870771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.378 [2024-07-15 06:57:16.870785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.870974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.870987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 
06:57:16.871784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.379 [2024-07-15 06:57:16.871925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.379 [2024-07-15 06:57:16.871930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.379 [2024-07-15 06:57:16.871938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.871945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.871950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.871961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.871963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.871976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.871977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.871993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.871995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.380 [2024-07-15 06:57:16.872313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.872326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:29.380 [2024-07-15 06:57:16.872373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872502] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df360 is same with the state(5) to be set 00:28:29.380 [2024-07-15 06:57:16.872971] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2dff3f0 was disconnected and freed. reset controller. 00:28:29.380 [2024-07-15 06:57:16.873103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.380 [2024-07-15 06:57:16.873127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.380 [2024-07-15 06:57:16.873153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.381 [2024-07-15 06:57:16.873167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.381 [2024-07-15 06:57:16.873182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.381 [2024-07-15 06:57:16.873196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.381 [2024-07-15 06:57:16.873210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.381 [2024-07-15 06:57:16.873224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.381 [2024-07-15 06:57:16.873238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015f90 is same with the state(5) to be set 00:28:29.381 [2024-07-15 06:57:16.873290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.381 [2024-07-15 06:57:16.873311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.381 [2024-07-15 06:57:16.873328] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb190 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febf90 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8810 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.381 [2024-07-15 06:57:16.873920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.381 [2024-07-15 06:57:16.873951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.381 [2024-07-15 06:57:16.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.382 [2024-07-15 06:57:16.873969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.873972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:29.382 [2024-07-15 06:57:16.873983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.873986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:29.382 [2024-07-15 06:57:16.873995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.873999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd300 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
00:28:29.382 [2024-07-15 06:57:16.874157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set
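The flood of identical tcp.c:1598 and nvme_tcp.c:323 lines above is one guard firing repeatedly, not many distinct faults: both transports log when a qpair's PDU receive state is set to the value it already holds, and once a connection is in the error state every poll re-requests that same state. A minimal sketch of that guard, with illustrative enum/struct names standing in for SPDK's internal tcp-transport types:

    #include <stdio.h>

    /* Illustrative stand-ins; the real state machine lives in SPDK's
     * tcp transport. State 5 corresponds to the "state(5)" in the log. */
    enum tqpair_recv_state { TQPAIR_RECV_STATE_ERROR = 5 };

    struct tqpair {
            enum tqpair_recv_state recv_state;
    };

    static void
    set_recv_state(struct tqpair *tq, enum tqpair_recv_state state)
    {
            if (tq->recv_state == state) {
                    /* Emits the repeated line seen above, once per call. */
                    fprintf(stderr, "The recv state of tqpair=%p is same with "
                            "the state(%d) to be set\n", (void *)tq, (int)state);
                    return;
            }
            tq->recv_state = state;
    }

Because the error state is sticky during teardown, every subsequent attempt to enter it re-logs the same message, which is why a single tqpair address repeats for pages of output.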
00:28:29.382 [2024-07-15 06:57:16.874169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.874356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24df800 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.875990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876225] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the 
state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.382 [2024-07-15 06:57:16.876588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.383 [2024-07-15 06:57:16.876600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.383 [2024-07-15 06:57:16.876612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfca0 is same with the state(5) to be set 00:28:29.383 [2024-07-15 06:57:16.877167] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:29.383 [2024-07-15 06:57:16.877208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015f90 (9): Bad file descriptor 00:28:29.383 [2024-07-15 06:57:16.877412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:29.383 [2024-07-15 06:57:16.877562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 
[2024-07-15 06:57:16.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.877973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.877993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 
06:57:16.878215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 
06:57:16.878503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.383 [2024-07-15 06:57:16.878660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.383 [2024-07-15 06:57:16.878674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 
06:57:16.878802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.878976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.878992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 
06:57:16.879124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.384 [2024-07-15 06:57:16.879419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.384 [2024-07-15 06:57:16.879537] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2113cd0 was disconnected and freed. reset controller. 
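The two "qpair ... was disconnected and freed. reset controller." notices (for 0x2dff3f0 and 0x2113cd0) come from bdev_nvme's disconnected-qpair callback. In SPDK's public poll-group API the same pattern looks roughly like the sketch below; the callback body is a simplification for illustration, not the bdev_nvme implementation:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Invoked by the poll group for each qpair whose transport has failed. */
    static void
    on_disconnected_qpair(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
    {
            /* Mirrors "qpair ... was disconnected and freed. reset controller." */
            fprintf(stderr, "qpair %p disconnected, freeing and scheduling reset\n",
                    (void *)qpair);
            spdk_nvme_ctrlr_free_io_qpair(qpair);
            /* ... then flag the owning controller for spdk_nvme_ctrlr_reset()
             * from the polling thread ... */
    }

    static void
    poll_once(struct spdk_nvme_poll_group *group)
    {
            /* 0 = no per-qpair completion cap; dead qpairs hit the callback. */
            spdk_nvme_poll_group_process_completions(group, 0, on_disconnected_qpair);
    }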
00:28:29.384 [2024-07-15 06:57:16.882581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.882834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:29.384 [2024-07-15 06:57:16.883134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe8810 (9): Bad file descriptor 00:28:29.384 [2024-07-15 06:57:16.883179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.384 [2024-07-15 06:57:16.883284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883320] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.385 [2024-07-15 06:57:16.883392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2015f90 with addr=10.0.0.2, port=4420 00:28:29.385 [2024-07-15 06:57:16.883418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015f90 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 
00:28:29.385 [2024-07-15 06:57:16.883533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0140 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab5610 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb190 (9): Bad file descriptor 00:28:29.385 [2024-07-15 06:57:16.883742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febf90 (9): Bad file descriptor 00:28:29.385 [2024-07-15 06:57:16.883793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.883925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.883938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fec0 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.883989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.884010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.884025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.884054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.884067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.884081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.385 [2024-07-15 06:57:16.884095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.884108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe06b0 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.884146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd300 (9): Bad file descriptor 00:28:29.385 [2024-07-15 06:57:16.885032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.885233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.385 [2024-07-15 06:57:16.885251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.885265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.385 [2024-07-15 06:57:16.885272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.385 [2024-07-15 06:57:16.885280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 
00:28:29.386 [2024-07-15 06:57:16.885361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the 
state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the 
state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.386 [2024-07-15 06:57:16.885824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.386 [2024-07-15 06:57:16.885942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.386 [2024-07-15 06:57:16.885955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.386 [2024-07-15 06:57:16.885959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.885967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.885975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.885980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.885990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.885993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e0aa0 is same with the state(5) to be set 00:28:29.387 [2024-07-15 06:57:16.886125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.886934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.886985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.387 [2024-07-15 06:57:16.911802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.387 [2024-07-15 06:57:16.911926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:29.388 [2024-07-15 06:57:16.912026] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb8010 was disconnected and freed. 
reset controller. 00:28:29.388 [2024-07-15 06:57:16.912288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015f90 (9): Bad file descriptor 00:28:29.388 [2024-07-15 06:57:16.912360] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:29.388 [2024-07-15 06:57:16.912392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab5610 (9): Bad file descriptor 00:28:29.388 [2024-07-15 06:57:16.912460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f50 is same with the state(5) to be set 00:28:29.388 [2024-07-15 06:57:16.912628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.388 [2024-07-15 06:57:16.912730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.912743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2161f00 is same with the state(5) to be set 00:28:29.388 [2024-07-15 06:57:16.912779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217fec0 (9): Bad file descriptor 00:28:29.388 [2024-07-15 06:57:16.912812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe06b0 (9): Bad file descriptor 00:28:29.388 [2024-07-15 06:57:16.912911] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:29.388 [2024-07-15 06:57:16.912995] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:29.388 [2024-07-15 06:57:16.913351] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:29.388 [2024-07-15 06:57:16.914808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:29.388 [2024-07-15 06:57:16.914842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2161f00 (9): Bad file descriptor 00:28:29.388 [2024-07-15 06:57:16.915054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.388 [2024-07-15 06:57:16.915082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe8810 with addr=10.0.0.2, port=4420 00:28:29.388 [2024-07-15 06:57:16.915104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8810 is same with the state(5) to be set 00:28:29.388 [2024-07-15 06:57:16.915121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:29.388 [2024-07-15 06:57:16.915133] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:29.388 [2024-07-15 06:57:16.915149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:29.388 [2024-07-15 06:57:16.915219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 
06:57:16.915531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.915975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.916004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.388 [2024-07-15 06:57:16.916018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.388 [2024-07-15 06:57:16.916034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.916982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.916997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.917011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.917026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.917040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.917055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.917069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.917089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.389 [2024-07-15 06:57:16.917118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.389 [2024-07-15 06:57:16.917132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.917147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.917161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.917175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094ef0 is same with the state(5) to be set 00:28:29.390 [2024-07-15 06:57:16.918423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.918970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.918990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.390 [2024-07-15 06:57:16.919660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.390 [2024-07-15 06:57:16.919674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:29.391 [2024-07-15 06:57:16.919842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.919985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.919999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 
06:57:16.920164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.920392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.920407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20961d0 is same with the state(5) to be set 00:28:29.391 [2024-07-15 06:57:16.921685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.921981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.921996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.391 [2024-07-15 06:57:16.922276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.391 [2024-07-15 06:57:16.922292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.922938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.922951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.392 [2024-07-15 06:57:16.930630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.392 [2024-07-15 06:57:16.930687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:42-63, lba:29952-32640, len:128 ...]
00:28:29.393 [2024-07-15 06:57:16.931396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21151f0 is same with the state(5) to be set
00:28:29.393 [2024-07-15 06:57:16.932787] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:29.393 [2024-07-15 06:57:16.932888] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:29.393 [2024-07-15 06:57:16.933102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.393 [2024-07-15 06:57:16.933131] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.393 [2024-07-15 06:57:16.933166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:29.393 [2024-07-15 06:57:16.933186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:29.393 [2024-07-15 06:57:16.933254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe8810 (9): Bad file descriptor
00:28:29.393 [2024-07-15 06:57:16.933345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f50 (9): Bad file descriptor
00:28:29.393 [2024-07-15 06:57:16.933404] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
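For reference, the status tuple printed with each aborted completion above, "(00/08) ... p:0 m:0 dnr:0", is the NVMe completion-queue-entry status field: status code type 0x00 (generic) with status code 0x08 (Command Aborted due to SQ Deletion), and the phase, more, and do-not-retry bits all clear. A minimal stand-alone decoder sketch, plain C against the NVMe spec layout of completion dword 3 (the names here are illustrative, not SPDK API):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Completion dword 3 for "(00/08)": SC = 0x08 at bits 24:17, SCT = 0 at bits 27:25 */
    uint32_t cqe_dw3 = (uint32_t)0x08 << 17;

    unsigned phase = (cqe_dw3 >> 16) & 0x1;  /* p:   phase tag */
    unsigned sc    = (cqe_dw3 >> 17) & 0xff; /* sc:  status code */
    unsigned sct   = (cqe_dw3 >> 25) & 0x7;  /* sct: status code type */
    unsigned more  = (cqe_dw3 >> 30) & 0x1;  /* m:   more status information available */
    unsigned dnr   = (cqe_dw3 >> 31) & 0x1;  /* dnr: do not retry */

    /* Prints "(00/08) p:0 m:0 dnr:0", matching the log's format */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, phase, more, dnr);
    return 0;
}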
00:28:29.393 [2024-07-15 06:57:16.934101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.393 [2024-07-15 06:57:16.934134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2161f00 with addr=10.0.0.2, port=4420
00:28:29.393 [2024-07-15 06:57:16.934152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2161f00 is same with the state(5) to be set
00:28:29.393 [2024-07-15 06:57:16.934278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.393 [2024-07-15 06:57:16.934308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd300 with addr=10.0.0.2, port=4420
00:28:29.393 [2024-07-15 06:57:16.934325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd300 is same with the state(5) to be set
00:28:29.393 [2024-07-15 06:57:16.934517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.393 [2024-07-15 06:57:16.934541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbb190 with addr=10.0.0.2, port=4420
00:28:29.393 [2024-07-15 06:57:16.934556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb190 is same with the state(5) to be set
00:28:29.393 [2024-07-15 06:57:16.934687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.393 [2024-07-15 06:57:16.934711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1febf90 with addr=10.0.0.2, port=4420
00:28:29.393 [2024-07-15 06:57:16.934726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febf90 is same with the state(5) to be set
00:28:29.393 [2024-07-15 06:57:16.934741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:28:29.393 [2024-07-15 06:57:16.934754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:28:29.393 [2024-07-15 06:57:16.934769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
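The four "connect() failed, errno = 111" entries above are ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 at that moment, consistent with the controller resets in progress in this part of the log. A trivial check of the errno mapping on the test host, plain C with no SPDK dependency:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux this prints "111 -> Connection refused" */
    printf("%d -> %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}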
00:28:29.393 [2024-07-15 06:57:16.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.393 [2024-07-15 06:57:16.935611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-63, lba:16512-24448, len:128 ...]
00:28:29.394 [2024-07-15 06:57:16.937779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21166f0 is same with the state(5) to be set
00:28:29.395 [2024-07-15 06:57:16.939041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.395 [2024-07-15 06:57:16.939064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-63, lba:16512-24448, len:128 ...]
00:28:29.396 [2024-07-15 06:57:16.941049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5630 is same with the state(5) to be set
00:28:29.396 [2024-07-15 06:57:16.942342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.396 [2024-07-15 06:57:16.942366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-40, lba:16512-21504, len:128 ...]
00:28:29.397 [2024-07-15 06:57:16.943612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.397 [2024-07-15 06:57:16.943626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.943970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.943985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.944000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.944016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.397 [2024-07-15 06:57:16.944033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.397 [2024-07-15 06:57:16.944049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.944286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.398 [2024-07-15 06:57:16.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.398 [2024-07-15 06:57:16.945618] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:29.398 [2024-07-15 06:57:16.945665] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:29.398 [2024-07-15 06:57:16.945690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.398 [2024-07-15 06:57:16.945706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:29.398 [2024-07-15 06:57:16.945722] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:29.398 [2024-07-15 06:57:16.945742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:29.398 [2024-07-15 06:57:16.945811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2161f00 (9): Bad file descriptor 00:28:29.398 [2024-07-15 06:57:16.945836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd300 (9): Bad file descriptor 00:28:29.398 [2024-07-15 06:57:16.945861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb190 (9): Bad file descriptor 00:28:29.398 [2024-07-15 06:57:16.945888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febf90 (9): Bad file descriptor 00:28:29.398 [2024-07-15 06:57:16.945967] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:29.398 [2024-07-15 06:57:16.945993] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:29.398 [2024-07-15 06:57:16.946014] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:29.398 [2024-07-15 06:57:16.946035] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
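[Editor's note: in the completion records above (and in the similar burst further down), the status (00/08) decodes as NVMe Status Code Type 0x0, Generic Command Status, with Status Code 0x08, "Command Aborted due to SQ Deletion". These aborts are the expected side effect of the shutdown test deleting the I/O submission queues while the bdevperf verify workload still has READs outstanding on qid:1.]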
00:28:29.398 [2024-07-15 06:57:16.946354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.398 [2024-07-15 06:57:16.946383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2015f90 with addr=10.0.0.2, port=4420
00:28:29.398 [2024-07-15 06:57:16.946400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015f90 is same with the state(5) to be set
00:28:29.398 [2024-07-15 06:57:16.946516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.398 [2024-07-15 06:57:16.946541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe06b0 with addr=10.0.0.2, port=4420
00:28:29.398 [2024-07-15 06:57:16.946557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe06b0 is same with the state(5) to be set
00:28:29.398 [2024-07-15 06:57:16.946673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.398 [2024-07-15 06:57:16.946698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217fec0 with addr=10.0.0.2, port=4420
00:28:29.398 [2024-07-15 06:57:16.946714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fec0 is same with the state(5) to be set
00:28:29.398 [2024-07-15 06:57:16.946830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.398 [2024-07-15 06:57:16.946854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab5610 with addr=10.0.0.2, port=4420
00:28:29.398 [2024-07-15 06:57:16.946869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab5610 is same with the state(5) to be set
00:28:29.398 [2024-07-15 06:57:16.946895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:28:29.398 [2024-07-15 06:57:16.946910] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:28:29.398 [2024-07-15 06:57:16.946926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:28:29.398 [2024-07-15 06:57:16.946946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.398 [2024-07-15 06:57:16.946961] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.398 [2024-07-15 06:57:16.946975] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.398 [2024-07-15 06:57:16.946991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:28:29.398 [2024-07-15 06:57:16.947005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:28:29.398 [2024-07-15 06:57:16.947018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
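[Editor's note: errno 111 on Linux is ECONNREFUSED: by this point the nvmf target has been shut down, so nothing is listening on 10.0.0.2:4420 and every reconnect attempt is refused. The failure mode can be reproduced outside SPDK with plain bash; the loopback address and port below are illustrative only, assuming nothing listens there:]

    # Hypothetical sketch: connect to a port with no listener; the kernel
    # refuses the TCP connection, which is errno 111 (ECONNREFUSED).
    ( exec 3<>/dev/tcp/127.0.0.1/4420 ) 2>/dev/null \
      || echo 'connect() failed, errno = 111 (ECONNREFUSED)'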
00:28:29.398 [2024-07-15 06:57:16.947034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:29.398 [2024-07-15 06:57:16.947048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:29.398 [2024-07-15 06:57:16.947065] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:29.398 .. 00:28:29.400 [2024-07-15 06:57:16.947897 .. 06:57:16.949842] nvme_qpair.c: [64 near-identical record pairs elided: nvme_io_qpair_print_command READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:29.400 [2024-07-15 06:57:16.949857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9440 is same with the state(5) to be set
00:28:29.400 [2024-07-15 06:57:16.951495] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:29.400 [2024-07-15 06:57:16.951525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.400 [2024-07-15 06:57:16.951541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.400 [2024-07-15 06:57:16.951552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.400 [2024-07-15 06:57:16.951564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.400 task offset: 19840 on job bdev=Nvme10n1 fails
00:28:29.400
00:28:29.400 Latency(us)
00:28:29.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.400 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme1n1 ended in about 0.91 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme1n1 : 0.91 141.05 8.82 70.53 0.00 299154.90 22136.60 251658.24
00:28:29.400 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme2n1 ended in about 0.91 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme2n1 : 0.91 140.55 8.78 70.28 0.00 294170.17 27573.67 250104.79
00:28:29.400 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme3n1 ended in about 0.87 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme3n1 : 0.87 220.42 13.78 73.47 0.00 206057.91 10728.49 254765.13
00:28:29.400 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme4n1 ended in about 0.92 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme4n1 : 0.92 208.32 13.02 69.44 0.00 214144.00 15825.73 242337.56
00:28:29.400 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme5n1 ended in about 0.93 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme5n1 : 0.93 137.93 8.62 68.96 0.00 281646.52 38447.79 233016.89
00:28:29.400 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme6n1 ended in about 0.93 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme6n1 : 0.93 137.44 8.59 68.72 0.00 276810.97 21456.97 250104.79
00:28:29.400 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme7n1 ended in about 0.93 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme7n1 : 0.93 136.97 8.56 68.48 0.00 271938.24 22233.69 302921.96
00:28:29.400 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme8n1 ended in about 0.90 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme8n1 : 0.90 212.42 13.28 70.81 0.00 191818.90 24758.04 251658.24
00:28:29.400 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme9n1 ended in about 0.94 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme9n1 : 0.94 136.16 8.51 68.08 0.00 262031.74 22524.97 279620.27
00:28:29.400 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.400 Job: Nvme10n1 ended in about 0.87 seconds with error
00:28:29.400 Verification LBA range: start 0x0 length 0x400
00:28:29.400 Nvme10n1 : 0.87 147.74 9.23 73.87 0.00 231677.16 6116.69 292047.83
00:28:29.400 ===================================================================================================================
00:28:29.400 Total : 1619.00 101.19 702.64 0.00 248496.13 6116.69 302921.96
00:28:29.659 [2024-07-15 06:57:16.976857] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:29.659 [2024-07-15 06:57:16.976943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
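[Editor's note: in the table above, the MiB/s column is IOPS x IO size; with 64 KiB (65536-byte) I/Os that is simply IOPS/16, e.g. 141.05/16 gives roughly 8.82 for Nvme1n1 and 1619.00/16 gives roughly 101.19 for the total. A quick consistency check, assuming the name/IOPS/MiB-per-s triples have been pasted into a hypothetical rows.txt:]

    # For 65536-byte I/Os, MiB/s should equal IOPS / 16.
    # rows.txt format (hypothetical): "<name> <IOPS> <MiB/s>" per line.
    awk '{ printf "%s: reported %.2f MiB/s, computed %.2f MiB/s\n", $1, $3, $2 / 16 }' rows.txt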
00:28:29.659 [2024-07-15 06:57:16.977029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015f90 (9): Bad file descriptor
00:28:29.659 [2024-07-15 06:57:16.977059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe06b0 (9): Bad file descriptor
00:28:29.659 [2024-07-15 06:57:16.977078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217fec0 (9): Bad file descriptor
00:28:29.659 [2024-07-15 06:57:16.977097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab5610 (9): Bad file descriptor
00:28:29.659 [2024-07-15 06:57:16.977489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.659 [2024-07-15 06:57:16.977528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe8810 with addr=10.0.0.2, port=4420
00:28:29.659 [2024-07-15 06:57:16.977548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8810 is same with the state(5) to be set
00:28:29.659 [2024-07-15 06:57:16.977669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.659 [2024-07-15 06:57:16.977696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160f50 with addr=10.0.0.2, port=4420
00:28:29.659 [2024-07-15 06:57:16.977712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f50 is same with the state(5) to be set
00:28:29.659 [2024-07-15 06:57:16.977728] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:29.659 [2024-07-15 06:57:16.977742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:29.659 [2024-07-15 06:57:16.977758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:29.659 [2024-07-15 06:57:16.977778] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:28:29.659 [2024-07-15 06:57:16.977793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:28:29.659 [2024-07-15 06:57:16.977807] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:28:29.659 [2024-07-15 06:57:16.977824] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:28:29.659 [2024-07-15 06:57:16.977838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:28:29.659 [2024-07-15 06:57:16.977851] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:28:29.659 [2024-07-15 06:57:16.977868] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:28:29.659 [2024-07-15 06:57:16.977893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:28:29.659 [2024-07-15 06:57:16.977907] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:28:29.659 [2024-07-15 06:57:16.977972] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:29.659 [2024-07-15 06:57:16.978008] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:29.659 [2024-07-15 06:57:16.978027] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:29.659 [2024-07-15 06:57:16.978045] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:29.659 [2024-07-15 06:57:16.978386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.659 [2024-07-15 06:57:16.978411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.659 [2024-07-15 06:57:16.978424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.659 [2024-07-15 06:57:16.978436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.659 [2024-07-15 06:57:16.978462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe8810 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.978483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f50 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.978543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:29.660 [2024-07-15 06:57:16.978567] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:29.660 [2024-07-15 06:57:16.978584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.660 [2024-07-15 06:57:16.978620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:28:29.660 [2024-07-15 06:57:16.978636] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:28:29.660 [2024-07-15 06:57:16.978650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:28:29.660 [2024-07-15 06:57:16.978667] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:29.660 [2024-07-15 06:57:16.978681] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:29.660 [2024-07-15 06:57:16.978694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:29.660 [2024-07-15 06:57:16.978732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:29.660 [2024-07-15 06:57:16.978762] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.660 [2024-07-15 06:57:16.978780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.660 [2024-07-15 06:57:16.978911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.660 [2024-07-15 06:57:16.978939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1febf90 with addr=10.0.0.2, port=4420
00:28:29.660 [2024-07-15 06:57:16.978956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febf90 is same with the state(5) to be set
00:28:29.660 [2024-07-15 06:57:16.979070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.660 [2024-07-15 06:57:16.979095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbb190 with addr=10.0.0.2, port=4420
00:28:29.660 [2024-07-15 06:57:16.979111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb190 is same with the state(5) to be set
00:28:29.660 [2024-07-15 06:57:16.979222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.660 [2024-07-15 06:57:16.979248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd300 with addr=10.0.0.2, port=4420
00:28:29.660 [2024-07-15 06:57:16.979264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd300 is same with the state(5) to be set
00:28:29.660 [2024-07-15 06:57:16.979393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.660 [2024-07-15 06:57:16.979425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2161f00 with addr=10.0.0.2, port=4420
00:28:29.660 [2024-07-15 06:57:16.979442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2161f00 is same with the state(5) to be set
00:28:29.660 [2024-07-15 06:57:16.979461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febf90 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.979480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb190 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.979497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd300 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.979544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2161f00 (9): Bad file descriptor
00:28:29.660 [2024-07-15 06:57:16.979567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:29.660 [2024-07-15 06:57:16.979581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:29.660 [2024-07-15 06:57:16.979595] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:29.660 [2024-07-15 06:57:16.979612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:28:29.660 [2024-07-15 06:57:16.979626] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:28:29.660 [2024-07-15 06:57:16.979639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:28:29.660 [2024-07-15 06:57:16.979654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.660 [2024-07-15 06:57:16.979667] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.660 [2024-07-15 06:57:16.979680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.660 [2024-07-15 06:57:16.979718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.660 [2024-07-15 06:57:16.979736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.660 [2024-07-15 06:57:16.979748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.660 [2024-07-15 06:57:16.979760] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:29.660 [2024-07-15 06:57:16.979772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:29.660 [2024-07-15 06:57:16.979786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:29.660 [2024-07-15 06:57:16.979824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.918 06:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:29.918 06:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 728402 00:28:30.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (728402) - No such process 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.855 rmmod nvme_tcp 00:28:30.855 rmmod nvme_fabrics 00:28:30.855 rmmod nvme_keyring 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.855 06:57:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.855 06:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.414 00:28:33.414 real 0m7.686s 00:28:33.414 user 0m19.206s 00:28:33.414 sys 0m1.520s 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.414 ************************************ 00:28:33.414 END TEST nvmf_shutdown_tc3 00:28:33.414 ************************************ 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:33.414 00:28:33.414 real 0m26.942s 00:28:33.414 user 1m14.868s 00:28:33.414 sys 0m6.184s 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:33.414 06:57:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.414 ************************************ 00:28:33.414 END TEST nvmf_shutdown 00:28:33.414 ************************************ 00:28:33.414 06:57:20 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.414 06:57:20 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.414 06:57:20 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:33.414 06:57:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:33.414 06:57:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.414 ************************************ 00:28:33.414 START TEST nvmf_multicontroller 00:28:33.414 ************************************ 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.414 * Looking for test storage... 00:28:33.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:33.414 06:57:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:33.415 06:57:20 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.415 06:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.313 06:57:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.313 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.314 06:57:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:35.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:28:35.314 00:28:35.314 --- 10.0.0.2 ping statistics --- 00:28:35.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.314 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:35.314 00:28:35.314 --- 10.0.0.1 ping statistics --- 00:28:35.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.314 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=730923 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 730923 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 730923 ']' 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:35.314 06:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.314 [2024-07-15 06:57:22.841016] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:35.314 [2024-07-15 06:57:22.841107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.314 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.314 [2024-07-15 06:57:22.909685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:35.583 [2024-07-15 06:57:23.000367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.583 [2024-07-15 06:57:23.000428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.583 [2024-07-15 06:57:23.000453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.583 [2024-07-15 06:57:23.000466] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.583 [2024-07-15 06:57:23.000476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.583 [2024-07-15 06:57:23.000541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.583 [2024-07-15 06:57:23.000668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.583 [2024-07-15 06:57:23.000672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 [2024-07-15 06:57:23.136071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.583 06:57:23 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 Malloc0 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.583 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.583 [2024-07-15 06:57:23.197439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 [2024-07-15 06:57:23.205297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 Malloc1 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
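The rpc_cmd calls traced above are the standard NVMe-oF/TCP target bring-up for this test. The same sequence as direct rpc.py invocations against the application's default socket (a sketch; the commands mirror the trace, only the socket path is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as in the trace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421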
00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=731070 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 731070 /var/tmp/bdevperf.sock 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 731070 ']' 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
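bdevperf is launched here with -z (start idle and wait for an RPC before running tests) on its own socket, /var/tmp/bdevperf.sock, so the test can attach controllers and kick off I/O remotely. The shape of that flow, condensed from the trace that follows (paths as in this workspace; treat as a sketch):

    # start bdevperf idle on a private RPC socket: queue depth 128, 4 KiB writes, 1 second
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # attach the target subsystem; the bdev appears as NVMe0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # re-attaching under the same -b NVMe0 without a valid multipath mode is expected to
    # fail with JSON-RPC error -114, which is what the NOT rpc_cmd cases below assert
    # finally, run the configured workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests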
00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:35.840 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.097 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:36.097 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:36.097 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:36.097 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.097 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.354 NVMe0n1 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.354 1 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.354 request: 00:28:36.354 { 00:28:36.354 "name": "NVMe0", 00:28:36.354 "trtype": "tcp", 00:28:36.354 "traddr": "10.0.0.2", 00:28:36.354 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:36.354 "hostaddr": "10.0.0.2", 00:28:36.354 "hostsvcid": "60000", 00:28:36.354 "adrfam": "ipv4", 00:28:36.354 "trsvcid": "4420", 00:28:36.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.354 "method": 
"bdev_nvme_attach_controller", 00:28:36.354 "req_id": 1 00:28:36.354 } 00:28:36.354 Got JSON-RPC error response 00:28:36.354 response: 00:28:36.354 { 00:28:36.354 "code": -114, 00:28:36.354 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:36.354 } 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.354 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.354 request: 00:28:36.354 { 00:28:36.354 "name": "NVMe0", 00:28:36.354 "trtype": "tcp", 00:28:36.354 "traddr": "10.0.0.2", 00:28:36.354 "hostaddr": "10.0.0.2", 00:28:36.355 "hostsvcid": "60000", 00:28:36.355 "adrfam": "ipv4", 00:28:36.355 "trsvcid": "4420", 00:28:36.355 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.355 "method": "bdev_nvme_attach_controller", 00:28:36.355 "req_id": 1 00:28:36.355 } 00:28:36.355 Got JSON-RPC error response 00:28:36.355 response: 00:28:36.355 { 00:28:36.355 "code": -114, 00:28:36.355 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:36.355 } 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.355 request: 00:28:36.355 { 00:28:36.355 "name": "NVMe0", 00:28:36.355 "trtype": "tcp", 00:28:36.355 "traddr": "10.0.0.2", 00:28:36.355 "hostaddr": "10.0.0.2", 00:28:36.355 "hostsvcid": "60000", 00:28:36.355 "adrfam": "ipv4", 00:28:36.355 "trsvcid": "4420", 00:28:36.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.355 "multipath": "disable", 00:28:36.355 "method": "bdev_nvme_attach_controller", 00:28:36.355 "req_id": 1 00:28:36.355 } 00:28:36.355 Got JSON-RPC error response 00:28:36.355 response: 00:28:36.355 { 00:28:36.355 "code": -114, 00:28:36.355 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:36.355 } 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.355 request: 00:28:36.355 { 00:28:36.355 "name": "NVMe0", 00:28:36.355 "trtype": "tcp", 00:28:36.355 "traddr": "10.0.0.2", 00:28:36.355 "hostaddr": "10.0.0.2", 00:28:36.355 "hostsvcid": "60000", 00:28:36.355 "adrfam": "ipv4", 00:28:36.355 "trsvcid": "4420", 00:28:36.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.355 "multipath": "failover", 00:28:36.355 "method": "bdev_nvme_attach_controller", 00:28:36.355 "req_id": 1 00:28:36.355 } 00:28:36.355 Got JSON-RPC error response 00:28:36.355 response: 00:28:36.355 { 00:28:36.355 "code": -114, 00:28:36.355 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:36.355 } 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.355 06:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.612 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.612 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:36.612 06:57:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:37.983 0 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 731070 ']' 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 731070' 00:28:37.983 killing process with pid 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 731070 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:37.983 06:57:25 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:37.983 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:37.983 [2024-07-15 06:57:23.302054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:37.983 [2024-07-15 06:57:23.302158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731070 ] 00:28:37.983 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.983 [2024-07-15 06:57:23.364575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.983 [2024-07-15 06:57:23.450948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.983 [2024-07-15 06:57:24.084185] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name e8ed94f4-5c84-4ba9-ac61-3b0762b4f2cf already exists 00:28:37.983 [2024-07-15 06:57:24.084240] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:e8ed94f4-5c84-4ba9-ac61-3b0762b4f2cf alias for bdev NVMe1n1 00:28:37.983 [2024-07-15 06:57:24.084259] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:37.983 Running I/O for 1 seconds... 
00:28:37.983
00:28:37.983 Latency(us)
00:28:37.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.983 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:37.983 NVMe0n1 : 1.01 17077.53 66.71 0.00 0.00 7463.71 6893.42 15825.73
00:28:37.983 ===================================================================================================================
00:28:37.983 Total : 17077.53 66.71 0.00 0.00 7463.71 6893.42 15825.73
00:28:37.983 Received shutdown signal, test time was about 1.000000 seconds
00:28:37.983
00:28:37.983 Latency(us)
00:28:37.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.983 ===================================================================================================================
00:28:37.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:37.983 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:28:37.983 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:37.984 rmmod nvme_tcp
00:28:37.984 rmmod nvme_fabrics
00:28:37.984 rmmod nvme_keyring
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 730923 ']'
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 730923
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 730923 ']'
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 730923
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 730923
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 730923'
killing process with pid 730923
00:28:37.984 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 730923
00:28:37.984 06:57:25
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 730923 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.550 06:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.450 06:57:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:40.450 00:28:40.450 real 0m7.324s 00:28:40.450 user 0m11.480s 00:28:40.450 sys 0m2.258s 00:28:40.450 06:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:40.450 06:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:40.450 ************************************ 00:28:40.450 END TEST nvmf_multicontroller 00:28:40.450 ************************************ 00:28:40.450 06:57:27 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:40.450 06:57:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:40.450 06:57:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:40.450 06:57:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:40.450 ************************************ 00:28:40.450 START TEST nvmf_aer 00:28:40.450 ************************************ 00:28:40.450 06:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:40.450 * Looking for test storage... 
00:28:40.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.450 06:57:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.451 06:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:42.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.982 
06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:42.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:28:42.982 00:28:42.982 --- 10.0.0.2 ping statistics --- 00:28:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.982 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:28:42.982 00:28:42.982 --- 10.0.0.1 ping statistics --- 00:28:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.982 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:42.982 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=733275 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 733275 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 733275 ']' 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:42.983 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.983 [2024-07-15 06:57:30.383245] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:42.983 [2024-07-15 06:57:30.383336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.983 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.983 [2024-07-15 06:57:30.462896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.983 [2024-07-15 06:57:30.559397] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.983 [2024-07-15 06:57:30.559461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
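The sequence from 'ip netns add' through 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' above is the standard SPDK NVMe-oF/TCP bring-up: one port of the NIC (cvl_0_0) is moved into a private network namespace and addressed as the 10.0.0.2 target, the other port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, with a simple RPC polling loop standing in for the harness's waitforlisten helper (the loop is an assumption, not the exact helper):

  # split the two NIC ports: cvl_0_0 becomes the target inside its own netns,
  # cvl_0_1 stays in the root namespace as the initiator
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # sanity-check the path before starting the target

  # launch the target in the namespace, then poll its RPC socket until it answers
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The -m 0xF core mask is why four reactors come up on cores 0-3 in the notices that follow.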
00:28:42.983 [2024-07-15 06:57:30.559477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.983 [2024-07-15 06:57:30.559490] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.983 [2024-07-15 06:57:30.559502] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.983 [2024-07-15 06:57:30.559591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.983 [2024-07-15 06:57:30.559659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.983 [2024-07-15 06:57:30.559685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.983 [2024-07-15 06:57:30.559687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 [2024-07-15 06:57:30.720715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 Malloc0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 [2024-07-15 06:57:30.774059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 [ 00:28:43.267 { 00:28:43.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.267 "subtype": "Discovery", 00:28:43.267 "listen_addresses": [], 00:28:43.267 "allow_any_host": true, 00:28:43.267 "hosts": [] 00:28:43.267 }, 00:28:43.267 { 00:28:43.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.267 "subtype": "NVMe", 00:28:43.267 "listen_addresses": [ 00:28:43.267 { 00:28:43.267 "trtype": "TCP", 00:28:43.267 "adrfam": "IPv4", 00:28:43.267 "traddr": "10.0.0.2", 00:28:43.267 "trsvcid": "4420" 00:28:43.267 } 00:28:43.267 ], 00:28:43.267 "allow_any_host": true, 00:28:43.267 "hosts": [], 00:28:43.267 "serial_number": "SPDK00000000000001", 00:28:43.267 "model_number": "SPDK bdev Controller", 00:28:43.267 "max_namespaces": 2, 00:28:43.267 "min_cntlid": 1, 00:28:43.267 "max_cntlid": 65519, 00:28:43.267 "namespaces": [ 00:28:43.267 { 00:28:43.267 "nsid": 1, 00:28:43.267 "bdev_name": "Malloc0", 00:28:43.267 "name": "Malloc0", 00:28:43.267 "nguid": "F6AAFB76A72D40AA80F270C9F25DB749", 00:28:43.267 "uuid": "f6aafb76-a72d-40aa-80f2-70c9f25db749" 00:28:43.267 } 00:28:43.267 ] 00:28:43.267 } 00:28:43.267 ] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=733302 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:43.267 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:43.267 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.526 06:57:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.526 Malloc1 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.526 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.526 Asynchronous Event Request test 00:28:43.526 Attaching to 10.0.0.2 00:28:43.526 Attached to 10.0.0.2 00:28:43.526 Registering asynchronous event callbacks... 00:28:43.526 Starting namespace attribute notice tests for all controllers... 00:28:43.526 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:43.526 aer_cb - Changed Namespace 00:28:43.526 Cleaning up... 00:28:43.526 [ 00:28:43.526 { 00:28:43.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.526 "subtype": "Discovery", 00:28:43.526 "listen_addresses": [], 00:28:43.526 "allow_any_host": true, 00:28:43.526 "hosts": [] 00:28:43.526 }, 00:28:43.526 { 00:28:43.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.526 "subtype": "NVMe", 00:28:43.526 "listen_addresses": [ 00:28:43.526 { 00:28:43.526 "trtype": "TCP", 00:28:43.526 "adrfam": "IPv4", 00:28:43.526 "traddr": "10.0.0.2", 00:28:43.526 "trsvcid": "4420" 00:28:43.526 } 00:28:43.526 ], 00:28:43.526 "allow_any_host": true, 00:28:43.526 "hosts": [], 00:28:43.526 "serial_number": "SPDK00000000000001", 00:28:43.526 "model_number": "SPDK bdev Controller", 00:28:43.526 "max_namespaces": 2, 00:28:43.526 "min_cntlid": 1, 00:28:43.527 "max_cntlid": 65519, 00:28:43.527 "namespaces": [ 00:28:43.527 { 00:28:43.527 "nsid": 1, 00:28:43.527 "bdev_name": "Malloc0", 00:28:43.527 "name": "Malloc0", 00:28:43.527 "nguid": "F6AAFB76A72D40AA80F270C9F25DB749", 00:28:43.527 "uuid": "f6aafb76-a72d-40aa-80f2-70c9f25db749" 00:28:43.527 }, 00:28:43.527 { 00:28:43.527 "nsid": 2, 00:28:43.527 "bdev_name": "Malloc1", 00:28:43.527 "name": "Malloc1", 00:28:43.527 "nguid": "91F2B7C125034161B73ECAB179C7433E", 00:28:43.527 "uuid": "91f2b7c1-2503-4161-b73e-cab179c7433e" 00:28:43.527 } 00:28:43.527 ] 00:28:43.527 } 00:28:43.527 ] 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 733302 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.527 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:43.784 rmmod nvme_tcp 00:28:43.784 rmmod nvme_fabrics 00:28:43.784 rmmod nvme_keyring 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 733275 ']' 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 733275 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 733275 ']' 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 733275 00:28:43.784 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 733275 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 733275' 00:28:43.785 killing process with pid 733275 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 733275 00:28:43.785 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 733275 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
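Behind the xtrace above, the aer test's logic is compact: create a TCP transport and a subsystem capped at two namespaces (-m 2), attach Malloc0 as nsid 1, start the aer example so it parks on an Asynchronous Event Request, then hot-add Malloc1 as nsid 2; the 'aer_cb - Changed Namespace' line is the Namespace Attribute Changed notice that the hot-add provokes. The same sequence, with rpc_cmd expanded to scripts/rpc.py (all commands as recorded in this log):

  # target: transport, a malloc bdev, and a subsystem with room for two namespaces
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host: the aer example connects, registers its AER callback, and waits
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # hot-adding nsid 2 is what fires the Changed Namespace AEN seen above
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The second nvmf_get_subsystems dump, listing both Malloc0 and Malloc1 namespaces, is the post-event state the test checks before tearing everything down.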
00:28:44.044 06:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.952 06:57:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.952 00:28:45.952 real 0m5.489s 00:28:45.952 user 0m4.175s 00:28:45.952 sys 0m2.028s 00:28:45.952 06:57:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:45.952 06:57:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:45.952 ************************************ 00:28:45.952 END TEST nvmf_aer 00:28:45.952 ************************************ 00:28:45.952 06:57:33 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:45.952 06:57:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:45.952 06:57:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:45.952 06:57:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:45.952 ************************************ 00:28:45.952 START TEST nvmf_async_init 00:28:45.952 ************************************ 00:28:45.952 06:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:46.209 * Looking for test storage... 00:28:46.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.209 06:57:33 
nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.209 06:57:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=15e6f28028cb42f88c984b138edb191d 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.210 06:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.110 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.111 06:57:35 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:48.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:48.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:28:48.111 00:28:48.111 --- 10.0.0.2 ping statistics --- 00:28:48.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.111 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:48.111 00:28:48.111 --- 10.0.0.1 ping statistics --- 00:28:48.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.111 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=735353 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 735353 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 735353 ']' 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:48.111 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.111 [2024-07-15 06:57:35.706732] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:48.111 [2024-07-15 06:57:35.706796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.370 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.370 [2024-07-15 06:57:35.773451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.370 [2024-07-15 06:57:35.862040] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.370 [2024-07-15 06:57:35.862110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.370 [2024-07-15 06:57:35.862137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.370 [2024-07-15 06:57:35.862150] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.370 [2024-07-15 06:57:35.862161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
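The trace above shows nvmf_tcp_init (nvmf/common.sh) building the loopback topology for this run: the first ice port, cvl_0_0, is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target (10.0.0.2/24); the second port, cvl_0_1, stays in the root namespace as the initiator (10.0.0.1/24); TCP port 4420 is opened in iptables; and both directions are verified with a single ping before nvmf_tgt is started under ip netns exec. A minimal standalone sketch of the same setup, assuming the interface names from this run (the harness itself derives them from the PCI scan above):

    #!/usr/bin/env bash
    # Sketch of the two-port loopback topology nvmf_tcp_init builds above.
    # cvl_0_0 / cvl_0_1 are the names from this run; substitute your own ports.
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

Because the target must answer from inside the namespace, nvmfappstart prefixes the nvmf_tgt command line with the NVMF_TARGET_NS_CMD array (ip netns exec cvl_0_0_ns_spdk), which is exactly what the nvmf/common.sh@480 entry above executes.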
00:28:48.370 [2024-07-15 06:57:35.862192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.370 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:48.370 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:48.370 06:57:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:48.370 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.370 06:57:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 [2024-07-15 06:57:36.009978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 null0 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 15e6f28028cb42f88c984b138edb191d 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 [2024-07-15 06:57:36.050245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.628 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.887 nvme0n1 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.887 [ 00:28:48.887 { 00:28:48.887 "name": "nvme0n1", 00:28:48.887 "aliases": [ 00:28:48.887 "15e6f280-28cb-42f8-8c98-4b138edb191d" 00:28:48.887 ], 00:28:48.887 "product_name": "NVMe disk", 00:28:48.887 "block_size": 512, 00:28:48.887 "num_blocks": 2097152, 00:28:48.887 "uuid": "15e6f280-28cb-42f8-8c98-4b138edb191d", 00:28:48.887 "assigned_rate_limits": { 00:28:48.887 "rw_ios_per_sec": 0, 00:28:48.887 "rw_mbytes_per_sec": 0, 00:28:48.887 "r_mbytes_per_sec": 0, 00:28:48.887 "w_mbytes_per_sec": 0 00:28:48.887 }, 00:28:48.887 "claimed": false, 00:28:48.887 "zoned": false, 00:28:48.887 "supported_io_types": { 00:28:48.887 "read": true, 00:28:48.887 "write": true, 00:28:48.887 "unmap": false, 00:28:48.887 "write_zeroes": true, 00:28:48.887 "flush": true, 00:28:48.887 "reset": true, 00:28:48.887 "compare": true, 00:28:48.887 "compare_and_write": true, 00:28:48.887 "abort": true, 00:28:48.887 "nvme_admin": true, 00:28:48.887 "nvme_io": true 00:28:48.887 }, 00:28:48.887 "memory_domains": [ 00:28:48.887 { 00:28:48.887 "dma_device_id": "system", 00:28:48.887 "dma_device_type": 1 00:28:48.887 } 00:28:48.887 ], 00:28:48.887 "driver_specific": { 00:28:48.887 "nvme": [ 00:28:48.887 { 00:28:48.887 "trid": { 00:28:48.887 "trtype": "TCP", 00:28:48.887 "adrfam": "IPv4", 00:28:48.887 "traddr": "10.0.0.2", 00:28:48.887 "trsvcid": "4420", 00:28:48.887 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:48.887 }, 00:28:48.887 "ctrlr_data": { 00:28:48.887 "cntlid": 1, 00:28:48.887 "vendor_id": "0x8086", 00:28:48.887 "model_number": "SPDK bdev Controller", 00:28:48.887 "serial_number": "00000000000000000000", 00:28:48.887 "firmware_revision": "24.05.1", 00:28:48.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.887 "oacs": { 00:28:48.887 "security": 0, 00:28:48.887 "format": 0, 00:28:48.887 "firmware": 0, 00:28:48.887 "ns_manage": 0 00:28:48.887 }, 00:28:48.887 "multi_ctrlr": true, 00:28:48.887 "ana_reporting": false 00:28:48.887 }, 00:28:48.887 "vs": { 00:28:48.887 "nvme_version": "1.3" 00:28:48.887 }, 00:28:48.887 "ns_data": { 00:28:48.887 "id": 1, 00:28:48.887 "can_share": true 00:28:48.887 } 00:28:48.887 } 00:28:48.887 ], 00:28:48.887 "mp_policy": "active_passive" 00:28:48.887 } 00:28:48.887 } 00:28:48.887 ] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.887 [2024-07-15 06:57:36.302837] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:48.887 [2024-07-15 06:57:36.302943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ad760 (9): Bad file descriptor 00:28:48.887 [2024-07-15 06:57:36.445025] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.887 [ 00:28:48.887 { 00:28:48.887 "name": "nvme0n1", 00:28:48.887 "aliases": [ 00:28:48.887 "15e6f280-28cb-42f8-8c98-4b138edb191d" 00:28:48.887 ], 00:28:48.887 "product_name": "NVMe disk", 00:28:48.887 "block_size": 512, 00:28:48.887 "num_blocks": 2097152, 00:28:48.887 "uuid": "15e6f280-28cb-42f8-8c98-4b138edb191d", 00:28:48.887 "assigned_rate_limits": { 00:28:48.887 "rw_ios_per_sec": 0, 00:28:48.887 "rw_mbytes_per_sec": 0, 00:28:48.887 "r_mbytes_per_sec": 0, 00:28:48.887 "w_mbytes_per_sec": 0 00:28:48.887 }, 00:28:48.887 "claimed": false, 00:28:48.887 "zoned": false, 00:28:48.887 "supported_io_types": { 00:28:48.887 "read": true, 00:28:48.887 "write": true, 00:28:48.887 "unmap": false, 00:28:48.887 "write_zeroes": true, 00:28:48.887 "flush": true, 00:28:48.887 "reset": true, 00:28:48.887 "compare": true, 00:28:48.887 "compare_and_write": true, 00:28:48.887 "abort": true, 00:28:48.887 "nvme_admin": true, 00:28:48.887 "nvme_io": true 00:28:48.887 }, 00:28:48.887 "memory_domains": [ 00:28:48.887 { 00:28:48.887 "dma_device_id": "system", 00:28:48.887 "dma_device_type": 1 00:28:48.887 } 00:28:48.887 ], 00:28:48.887 "driver_specific": { 00:28:48.887 "nvme": [ 00:28:48.887 { 00:28:48.887 "trid": { 00:28:48.887 "trtype": "TCP", 00:28:48.887 "adrfam": "IPv4", 00:28:48.887 "traddr": "10.0.0.2", 00:28:48.887 "trsvcid": "4420", 00:28:48.887 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:48.887 }, 00:28:48.887 "ctrlr_data": { 00:28:48.887 "cntlid": 2, 00:28:48.887 "vendor_id": "0x8086", 00:28:48.887 "model_number": "SPDK bdev Controller", 00:28:48.887 "serial_number": "00000000000000000000", 00:28:48.887 "firmware_revision": "24.05.1", 00:28:48.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.887 "oacs": { 00:28:48.887 "security": 0, 00:28:48.887 "format": 0, 00:28:48.887 "firmware": 0, 00:28:48.887 "ns_manage": 0 00:28:48.887 }, 00:28:48.887 "multi_ctrlr": true, 00:28:48.887 "ana_reporting": false 00:28:48.887 }, 00:28:48.887 "vs": { 00:28:48.887 "nvme_version": "1.3" 00:28:48.887 }, 00:28:48.887 "ns_data": { 00:28:48.887 "id": 1, 00:28:48.887 "can_share": true 00:28:48.887 } 00:28:48.887 } 00:28:48.887 ], 00:28:48.887 "mp_policy": "active_passive" 00:28:48.887 } 00:28:48.887 } 00:28:48.887 ] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.887 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@53 -- # mktemp 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dh2cSX3EOL 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dh2cSX3EOL 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.888 [2024-07-15 06:57:36.495479] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:48.888 [2024-07-15 06:57:36.495606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dh2cSX3EOL 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.888 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:49.146 [2024-07-15 06:57:36.503502] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dh2cSX3EOL 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:49.146 [2024-07-15 06:57:36.511514] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:49.146 [2024-07-15 06:57:36.511572] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:49.146 nvme0n1 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:49.146 [ 00:28:49.146 { 00:28:49.146 "name": "nvme0n1", 00:28:49.146 "aliases": [ 00:28:49.146 "15e6f280-28cb-42f8-8c98-4b138edb191d" 00:28:49.146 ], 
00:28:49.146 "product_name": "NVMe disk", 00:28:49.146 "block_size": 512, 00:28:49.146 "num_blocks": 2097152, 00:28:49.146 "uuid": "15e6f280-28cb-42f8-8c98-4b138edb191d", 00:28:49.146 "assigned_rate_limits": { 00:28:49.146 "rw_ios_per_sec": 0, 00:28:49.146 "rw_mbytes_per_sec": 0, 00:28:49.146 "r_mbytes_per_sec": 0, 00:28:49.146 "w_mbytes_per_sec": 0 00:28:49.146 }, 00:28:49.146 "claimed": false, 00:28:49.146 "zoned": false, 00:28:49.146 "supported_io_types": { 00:28:49.146 "read": true, 00:28:49.146 "write": true, 00:28:49.146 "unmap": false, 00:28:49.146 "write_zeroes": true, 00:28:49.146 "flush": true, 00:28:49.146 "reset": true, 00:28:49.146 "compare": true, 00:28:49.146 "compare_and_write": true, 00:28:49.146 "abort": true, 00:28:49.146 "nvme_admin": true, 00:28:49.146 "nvme_io": true 00:28:49.146 }, 00:28:49.146 "memory_domains": [ 00:28:49.146 { 00:28:49.146 "dma_device_id": "system", 00:28:49.146 "dma_device_type": 1 00:28:49.146 } 00:28:49.146 ], 00:28:49.146 "driver_specific": { 00:28:49.146 "nvme": [ 00:28:49.146 { 00:28:49.146 "trid": { 00:28:49.146 "trtype": "TCP", 00:28:49.146 "adrfam": "IPv4", 00:28:49.146 "traddr": "10.0.0.2", 00:28:49.146 "trsvcid": "4421", 00:28:49.146 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:49.146 }, 00:28:49.146 "ctrlr_data": { 00:28:49.146 "cntlid": 3, 00:28:49.146 "vendor_id": "0x8086", 00:28:49.146 "model_number": "SPDK bdev Controller", 00:28:49.146 "serial_number": "00000000000000000000", 00:28:49.146 "firmware_revision": "24.05.1", 00:28:49.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.146 "oacs": { 00:28:49.146 "security": 0, 00:28:49.146 "format": 0, 00:28:49.146 "firmware": 0, 00:28:49.146 "ns_manage": 0 00:28:49.146 }, 00:28:49.146 "multi_ctrlr": true, 00:28:49.146 "ana_reporting": false 00:28:49.146 }, 00:28:49.146 "vs": { 00:28:49.146 "nvme_version": "1.3" 00:28:49.146 }, 00:28:49.146 "ns_data": { 00:28:49.146 "id": 1, 00:28:49.146 "can_share": true 00:28:49.146 } 00:28:49.146 } 00:28:49.146 ], 00:28:49.146 "mp_policy": "active_passive" 00:28:49.146 } 00:28:49.146 } 00:28:49.146 ] 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.dh2cSX3EOL 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.146 rmmod nvme_tcp 00:28:49.146 rmmod nvme_fabrics 00:28:49.146 rmmod nvme_keyring 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 735353 ']' 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 735353 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 735353 ']' 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 735353 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 735353 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 735353' 00:28:49.146 killing process with pid 735353 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 735353 00:28:49.146 [2024-07-15 06:57:36.711698] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:49.146 [2024-07-15 06:57:36.711737] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:49.146 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 735353 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.405 06:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.939 06:57:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:51.939 00:28:51.939 real 0m5.424s 00:28:51.939 user 0m2.023s 00:28:51.939 sys 0m1.760s 00:28:51.939 06:57:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:51.939 06:57:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.939 ************************************ 00:28:51.939 END TEST nvmf_async_init 00:28:51.939 ************************************ 00:28:51.939 06:57:38 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:51.939 06:57:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:51.939 06:57:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:51.939 06:57:38 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:51.939 ************************************ 00:28:51.939 START TEST dma 00:28:51.939 ************************************ 00:28:51.939 06:57:38 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:51.939 * Looking for test storage... 00:28:51.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.939 06:57:39 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.939 06:57:39 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.939 06:57:39 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.939 06:57:39 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.939 06:57:39 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.939 06:57:39 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.939 06:57:39 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.939 06:57:39 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:51.939 06:57:39 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:51.939 06:57:39 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:51.939 06:57:39 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:51.939 06:57:39 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:51.939 00:28:51.939 real 0m0.065s 00:28:51.939 user 0m0.025s 00:28:51.939 sys 0m0.045s 00:28:51.939 06:57:39 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:51.939 06:57:39 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:51.939 ************************************ 00:28:51.939 END TEST dma 00:28:51.939 ************************************ 00:28:51.939 06:57:39 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:51.939 06:57:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:51.939 06:57:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:51.939 06:57:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:51.939 ************************************ 00:28:51.939 START TEST 
nvmf_identify 00:28:51.939 ************************************ 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:51.939 * Looking for test storage... 00:28:51.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.939 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:51.940 06:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:53.866 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.866 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:53.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:53.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:53.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:53.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:53.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:28:53.867 00:28:53.867 --- 10.0.0.2 ping statistics --- 00:28:53.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.867 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:28:53.867
00:28:53.867 --- 10.0.0.1 ping statistics ---
00:28:53.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:53.867 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=737394
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:53.867 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 737394
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 737394 ']'
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:53.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:53.868 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:53.868 [2024-07-15 06:57:41.333518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:53.868 [2024-07-15 06:57:41.333609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:53.868 EAL: No free 2048 kB hugepages reported on node 1
00:28:53.868 [2024-07-15 06:57:41.408787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:54.125 [2024-07-15 06:57:41.501169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:54.125 [2024-07-15 06:57:41.501223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
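For orientation before the identify output proper: the dma host test above exits immediately on this transport (host/dma.sh@12 tests '[' tcp '!=' rdma ']' and then exit 0, since the DMA memory-domain path is RDMA-only), so the suite has moved on to nvmf_identify, which repeats the PCI scan and namespace bring-up and starts a four-core target (-m 0xF). The rpc_cmd calls it drives next map onto direct scripts/rpc.py invocations roughly as follows; this is a sketch under the assumption that rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock socket, as autotest_common.sh arranges here:

    # Sketch of the identify test's target-side RPC sequence (see the trace below).
    RPC="scripts/rpc.py"   # assumed stand-in for the log's rpc_cmd wrapper
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems    # returns the discovery and cnode1 descriptors shown below

After this setup the test runs spdk_nvme_identify against the discovery service on 10.0.0.2:4420, which is where the trace below picks up.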
00:28:54.125 [2024-07-15 06:57:41.501249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.125 [2024-07-15 06:57:41.501263] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.125 [2024-07-15 06:57:41.501275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.125 [2024-07-15 06:57:41.501342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.125 [2024-07-15 06:57:41.501412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.125 [2024-07-15 06:57:41.501508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.125 [2024-07-15 06:57:41.501510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 [2024-07-15 06:57:41.621331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 Malloc0 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 [2024-07-15 06:57:41.692302] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.125 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.125 [ 00:28:54.125 { 00:28:54.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:54.125 "subtype": "Discovery", 00:28:54.125 "listen_addresses": [ 00:28:54.125 { 00:28:54.125 "trtype": "TCP", 00:28:54.125 "adrfam": "IPv4", 00:28:54.125 "traddr": "10.0.0.2", 00:28:54.125 "trsvcid": "4420" 00:28:54.125 } 00:28:54.125 ], 00:28:54.125 "allow_any_host": true, 00:28:54.125 "hosts": [] 00:28:54.125 }, 00:28:54.125 { 00:28:54.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.126 "subtype": "NVMe", 00:28:54.126 "listen_addresses": [ 00:28:54.126 { 00:28:54.126 "trtype": "TCP", 00:28:54.126 "adrfam": "IPv4", 00:28:54.126 "traddr": "10.0.0.2", 00:28:54.126 "trsvcid": "4420" 00:28:54.126 } 00:28:54.126 ], 00:28:54.126 "allow_any_host": true, 00:28:54.126 "hosts": [], 00:28:54.126 "serial_number": "SPDK00000000000001", 00:28:54.126 "model_number": "SPDK bdev Controller", 00:28:54.126 "max_namespaces": 32, 00:28:54.126 "min_cntlid": 1, 00:28:54.126 "max_cntlid": 65519, 00:28:54.126 "namespaces": [ 00:28:54.126 { 00:28:54.126 "nsid": 1, 00:28:54.126 "bdev_name": "Malloc0", 00:28:54.126 "name": "Malloc0", 00:28:54.126 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:54.126 "eui64": "ABCDEF0123456789", 00:28:54.126 "uuid": "1ca1f77c-a2fc-49dc-b95c-bb38a62ba005" 00:28:54.126 } 00:28:54.126 ] 00:28:54.126 } 00:28:54.126 ] 00:28:54.126 06:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.126 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:54.126 [2024-07-15 06:57:41.729544] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
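At this point everything the identify pass needs is configured: a TCP transport, a 64 MiB Malloc0 bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, and both data and discovery listeners on 10.0.0.2:4420, all confirmed by the nvmf_get_subsystems JSON above. The rpc_cmd wrapper used by the script is a thin shim over scripts/rpc.py, so the same target state can be reproduced by hand roughly as follows (flags copied from the trace; RPC socket path left at its default):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems                          # returns the JSON dumped above

The -a on nvmf_create_subsystem is what shows up as "allow_any_host": true in that dump, and -s sets the serial number echoed back in the subsystem listing.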
00:28:54.126 [2024-07-15 06:57:41.729581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737503 ] 00:28:54.126 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.386 [2024-07-15 06:57:41.764328] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:54.386 [2024-07-15 06:57:41.764383] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:54.386 [2024-07-15 06:57:41.764393] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:54.386 [2024-07-15 06:57:41.764406] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:54.386 [2024-07-15 06:57:41.764419] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:54.386 [2024-07-15 06:57:41.764654] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:54.386 [2024-07-15 06:57:41.764711] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbc1980 0 00:28:54.386 [2024-07-15 06:57:41.778892] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:54.386 [2024-07-15 06:57:41.778913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:54.386 [2024-07-15 06:57:41.778921] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:54.386 [2024-07-15 06:57:41.778927] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:54.386 [2024-07-15 06:57:41.778977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.778989] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.778996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.386 [2024-07-15 06:57:41.779013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:54.386 [2024-07-15 06:57:41.779043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.386 [2024-07-15 06:57:41.786890] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.386 [2024-07-15 06:57:41.786908] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.386 [2024-07-15 06:57:41.786915] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.786922] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.386 [2024-07-15 06:57:41.786937] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:54.386 [2024-07-15 06:57:41.786948] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:54.386 [2024-07-15 06:57:41.786957] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:54.386 [2024-07-15 06:57:41.786977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.786986] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.786992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.386 [2024-07-15 06:57:41.787003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.386 [2024-07-15 06:57:41.787027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.386 [2024-07-15 06:57:41.787186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.386 [2024-07-15 06:57:41.787202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.386 [2024-07-15 06:57:41.787209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.386 [2024-07-15 06:57:41.787228] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:54.386 [2024-07-15 06:57:41.787242] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:54.386 [2024-07-15 06:57:41.787255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.386 [2024-07-15 06:57:41.787279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.386 [2024-07-15 06:57:41.787300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.386 [2024-07-15 06:57:41.787406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.386 [2024-07-15 06:57:41.787418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.386 [2024-07-15 06:57:41.787424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787431] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.386 [2024-07-15 06:57:41.787440] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:54.386 [2024-07-15 06:57:41.787453] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:54.386 [2024-07-15 06:57:41.787465] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787479] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.386 [2024-07-15 06:57:41.787489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.386 [2024-07-15 06:57:41.787514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.386 [2024-07-15 06:57:41.787622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.386 [2024-07-15 06:57:41.787637] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.386 [2024-07-15 06:57:41.787644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.386 [2024-07-15 06:57:41.787660] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:54.386 [2024-07-15 06:57:41.787676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.386 [2024-07-15 06:57:41.787692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.386 [2024-07-15 06:57:41.787702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.386 [2024-07-15 06:57:41.787723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.386 [2024-07-15 06:57:41.787829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.386 [2024-07-15 06:57:41.787844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.787851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.787857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.787866] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:54.387 [2024-07-15 06:57:41.787874] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:54.387 [2024-07-15 06:57:41.787896] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:54.387 [2024-07-15 06:57:41.788006] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:54.387 [2024-07-15 06:57:41.788015] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:54.387 [2024-07-15 06:57:41.788028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.788068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.387 [2024-07-15 06:57:41.788089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.387 [2024-07-15 06:57:41.788267] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.387 [2024-07-15 06:57:41.788280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.788286] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 
[2024-07-15 06:57:41.788293] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.788302] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:54.387 [2024-07-15 06:57:41.788317] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.788347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.387 [2024-07-15 06:57:41.788369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.387 [2024-07-15 06:57:41.788478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.387 [2024-07-15 06:57:41.788494] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.788501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788507] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.788515] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:54.387 [2024-07-15 06:57:41.788524] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.788537] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:54.387 [2024-07-15 06:57:41.788556] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.788574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.788594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.387 [2024-07-15 06:57:41.788615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.387 [2024-07-15 06:57:41.788761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.387 [2024-07-15 06:57:41.788773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.387 [2024-07-15 06:57:41.788780] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788786] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc1980): datao=0, datal=4096, cccid=0 00:28:54.387 [2024-07-15 06:57:41.788794] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc294c0) on tqpair(0xbc1980): expected_datao=0, payload_size=4096 00:28:54.387 [2024-07-15 06:57:41.788802] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 
[2024-07-15 06:57:41.788818] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.788828] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.387 [2024-07-15 06:57:41.829090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.829098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.829123] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:54.387 [2024-07-15 06:57:41.829133] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:54.387 [2024-07-15 06:57:41.829141] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:54.387 [2024-07-15 06:57:41.829149] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:54.387 [2024-07-15 06:57:41.829156] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:54.387 [2024-07-15 06:57:41.829164] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.829183] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.829196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829203] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.387 [2024-07-15 06:57:41.829244] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.387 [2024-07-15 06:57:41.829358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.387 [2024-07-15 06:57:41.829373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.829380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc294c0) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.829399] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.387 [2024-07-15 06:57:41.829433] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.387 [2024-07-15 06:57:41.829464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.387 [2024-07-15 06:57:41.829495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.387 [2024-07-15 06:57:41.829541] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.829560] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:54.387 [2024-07-15 06:57:41.829573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.829580] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.829590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.387 [2024-07-15 06:57:41.829627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc294c0, cid 0, qid 0 00:28:54.387 [2024-07-15 06:57:41.829638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29620, cid 1, qid 0 00:28:54.387 [2024-07-15 06:57:41.829649] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29780, cid 2, qid 0 00:28:54.387 [2024-07-15 06:57:41.829657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.387 [2024-07-15 06:57:41.829664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29a40, cid 4, qid 0 00:28:54.387 [2024-07-15 06:57:41.833884] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.387 [2024-07-15 06:57:41.833902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.387 [2024-07-15 06:57:41.833910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.833917] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29a40) on tqpair=0xbc1980 00:28:54.387 [2024-07-15 06:57:41.833926] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:54.387 [2024-07-15 06:57:41.833945] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:54.387 [2024-07-15 06:57:41.833964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.833973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc1980) 00:28:54.387 [2024-07-15 06:57:41.833984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.387 [2024-07-15 06:57:41.834005] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29a40, cid 4, qid 0 00:28:54.387 [2024-07-15 06:57:41.834165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.387 [2024-07-15 06:57:41.834178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.387 [2024-07-15 06:57:41.834185] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.387 [2024-07-15 06:57:41.834191] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc1980): datao=0, datal=4096, cccid=4 00:28:54.387 [2024-07-15 06:57:41.834199] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc29a40) on tqpair(0xbc1980): expected_datao=0, payload_size=4096 00:28:54.388 [2024-07-15 06:57:41.834206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834216] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834224] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.388 [2024-07-15 06:57:41.834257] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.388 [2024-07-15 06:57:41.834264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29a40) on tqpair=0xbc1980 00:28:54.388 [2024-07-15 06:57:41.834288] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:54.388 [2024-07-15 06:57:41.834327] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834338] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc1980) 00:28:54.388 [2024-07-15 06:57:41.834349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.388 [2024-07-15 06:57:41.834360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834367] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834373] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbc1980) 00:28:54.388 [2024-07-15 06:57:41.834382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.388 [2024-07-15 06:57:41.834411] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc29a40, cid 4, qid 0 00:28:54.388 [2024-07-15 06:57:41.834422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29ba0, cid 5, qid 0 00:28:54.388 [2024-07-15 06:57:41.834571] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.388 [2024-07-15 06:57:41.834584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.388 [2024-07-15 06:57:41.834591] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834597] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc1980): datao=0, datal=1024, cccid=4 00:28:54.388 [2024-07-15 06:57:41.834604] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc29a40) on tqpair(0xbc1980): expected_datao=0, payload_size=1024 00:28:54.388 [2024-07-15 06:57:41.834612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834621] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834629] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834637] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.388 [2024-07-15 06:57:41.834646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.388 [2024-07-15 06:57:41.834653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.834659] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29ba0) on tqpair=0xbc1980 00:28:54.388 [2024-07-15 06:57:41.875009] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.388 [2024-07-15 06:57:41.875027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.388 [2024-07-15 06:57:41.875034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875041] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29a40) on tqpair=0xbc1980 00:28:54.388 [2024-07-15 06:57:41.875058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc1980) 00:28:54.388 [2024-07-15 06:57:41.875078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.388 [2024-07-15 06:57:41.875108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29a40, cid 4, qid 0 00:28:54.388 [2024-07-15 06:57:41.875235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.388 [2024-07-15 06:57:41.875248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.388 [2024-07-15 06:57:41.875255] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875261] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc1980): datao=0, datal=3072, cccid=4 00:28:54.388 [2024-07-15 06:57:41.875269] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc29a40) on tqpair(0xbc1980): expected_datao=0, payload_size=3072 00:28:54.388 [2024-07-15 06:57:41.875276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875286] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875294] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.388 [2024-07-15 06:57:41.875316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.388 [2024-07-15 06:57:41.875322] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875329] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29a40) on tqpair=0xbc1980 00:28:54.388 [2024-07-15 06:57:41.875343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbc1980) 00:28:54.388 [2024-07-15 06:57:41.875362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.388 [2024-07-15 06:57:41.875389] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc29a40, cid 4, qid 0 00:28:54.388 [2024-07-15 06:57:41.875515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.388 [2024-07-15 06:57:41.875528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.388 [2024-07-15 06:57:41.875535] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875541] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbc1980): datao=0, datal=8, cccid=4 00:28:54.388 [2024-07-15 06:57:41.875548] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc29a40) on tqpair(0xbc1980): expected_datao=0, payload_size=8 00:28:54.388 [2024-07-15 06:57:41.875556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875565] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.875573] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.916038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.388 [2024-07-15 06:57:41.916057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.388 [2024-07-15 06:57:41.916064] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.388 [2024-07-15 06:57:41.916071] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc29a40) on tqpair=0xbc1980 00:28:54.388 ===================================================== 00:28:54.388 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:54.388 ===================================================== 00:28:54.388 Controller Capabilities/Features 00:28:54.388 ================================ 00:28:54.388 Vendor ID: 0000 00:28:54.388 Subsystem Vendor ID: 0000 00:28:54.388 Serial Number: .................... 00:28:54.388 Model Number: ........................................ 
00:28:54.388 Firmware Version: 24.05.1 00:28:54.388 Recommended Arb Burst: 0 00:28:54.388 IEEE OUI Identifier: 00 00 00 00:28:54.388 Multi-path I/O 00:28:54.388 May have multiple subsystem ports: No 00:28:54.388 May have multiple controllers: No 00:28:54.388 Associated with SR-IOV VF: No 00:28:54.388 Max Data Transfer Size: 131072 00:28:54.388 Max Number of Namespaces: 0 00:28:54.388 Max Number of I/O Queues: 1024 00:28:54.388 NVMe Specification Version (VS): 1.3 00:28:54.388 NVMe Specification Version (Identify): 1.3 00:28:54.388 Maximum Queue Entries: 128 00:28:54.388 Contiguous Queues Required: Yes 00:28:54.388 Arbitration Mechanisms Supported 00:28:54.388 Weighted Round Robin: Not Supported 00:28:54.388 Vendor Specific: Not Supported 00:28:54.388 Reset Timeout: 15000 ms 00:28:54.388 Doorbell Stride: 4 bytes 00:28:54.388 NVM Subsystem Reset: Not Supported 00:28:54.388 Command Sets Supported 00:28:54.388 NVM Command Set: Supported 00:28:54.388 Boot Partition: Not Supported 00:28:54.388 Memory Page Size Minimum: 4096 bytes 00:28:54.388 Memory Page Size Maximum: 4096 bytes 00:28:54.388 Persistent Memory Region: Not Supported 00:28:54.388 Optional Asynchronous Events Supported 00:28:54.388 Namespace Attribute Notices: Not Supported 00:28:54.388 Firmware Activation Notices: Not Supported 00:28:54.388 ANA Change Notices: Not Supported 00:28:54.388 PLE Aggregate Log Change Notices: Not Supported 00:28:54.388 LBA Status Info Alert Notices: Not Supported 00:28:54.388 EGE Aggregate Log Change Notices: Not Supported 00:28:54.388 Normal NVM Subsystem Shutdown event: Not Supported 00:28:54.388 Zone Descriptor Change Notices: Not Supported 00:28:54.388 Discovery Log Change Notices: Supported 00:28:54.388 Controller Attributes 00:28:54.388 128-bit Host Identifier: Not Supported 00:28:54.388 Non-Operational Permissive Mode: Not Supported 00:28:54.388 NVM Sets: Not Supported 00:28:54.388 Read Recovery Levels: Not Supported 00:28:54.388 Endurance Groups: Not Supported 00:28:54.388 Predictable Latency Mode: Not Supported 00:28:54.388 Traffic Based Keep ALive: Not Supported 00:28:54.388 Namespace Granularity: Not Supported 00:28:54.388 SQ Associations: Not Supported 00:28:54.388 UUID List: Not Supported 00:28:54.388 Multi-Domain Subsystem: Not Supported 00:28:54.388 Fixed Capacity Management: Not Supported 00:28:54.388 Variable Capacity Management: Not Supported 00:28:54.388 Delete Endurance Group: Not Supported 00:28:54.388 Delete NVM Set: Not Supported 00:28:54.388 Extended LBA Formats Supported: Not Supported 00:28:54.388 Flexible Data Placement Supported: Not Supported 00:28:54.388 00:28:54.388 Controller Memory Buffer Support 00:28:54.388 ================================ 00:28:54.388 Supported: No 00:28:54.388 00:28:54.388 Persistent Memory Region Support 00:28:54.388 ================================ 00:28:54.388 Supported: No 00:28:54.388 00:28:54.388 Admin Command Set Attributes 00:28:54.388 ============================ 00:28:54.388 Security Send/Receive: Not Supported 00:28:54.388 Format NVM: Not Supported 00:28:54.388 Firmware Activate/Download: Not Supported 00:28:54.388 Namespace Management: Not Supported 00:28:54.388 Device Self-Test: Not Supported 00:28:54.388 Directives: Not Supported 00:28:54.388 NVMe-MI: Not Supported 00:28:54.388 Virtualization Management: Not Supported 00:28:54.388 Doorbell Buffer Config: Not Supported 00:28:54.388 Get LBA Status Capability: Not Supported 00:28:54.388 Command & Feature Lockdown Capability: Not Supported 00:28:54.388 Abort Command Limit: 1 00:28:54.388 
Async Event Request Limit: 4 00:28:54.389 Number of Firmware Slots: N/A 00:28:54.389 Firmware Slot 1 Read-Only: N/A 00:28:54.389 Firmware Activation Without Reset: N/A 00:28:54.389 Multiple Update Detection Support: N/A 00:28:54.389 Firmware Update Granularity: No Information Provided 00:28:54.389 Per-Namespace SMART Log: No 00:28:54.389 Asymmetric Namespace Access Log Page: Not Supported 00:28:54.389 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:54.389 Command Effects Log Page: Not Supported 00:28:54.389 Get Log Page Extended Data: Supported 00:28:54.389 Telemetry Log Pages: Not Supported 00:28:54.389 Persistent Event Log Pages: Not Supported 00:28:54.389 Supported Log Pages Log Page: May Support 00:28:54.389 Commands Supported & Effects Log Page: Not Supported 00:28:54.389 Feature Identifiers & Effects Log Page:May Support 00:28:54.389 NVMe-MI Commands & Effects Log Page: May Support 00:28:54.389 Data Area 4 for Telemetry Log: Not Supported 00:28:54.389 Error Log Page Entries Supported: 128 00:28:54.389 Keep Alive: Not Supported 00:28:54.389 00:28:54.389 NVM Command Set Attributes 00:28:54.389 ========================== 00:28:54.389 Submission Queue Entry Size 00:28:54.389 Max: 1 00:28:54.389 Min: 1 00:28:54.389 Completion Queue Entry Size 00:28:54.389 Max: 1 00:28:54.389 Min: 1 00:28:54.389 Number of Namespaces: 0 00:28:54.389 Compare Command: Not Supported 00:28:54.389 Write Uncorrectable Command: Not Supported 00:28:54.389 Dataset Management Command: Not Supported 00:28:54.389 Write Zeroes Command: Not Supported 00:28:54.389 Set Features Save Field: Not Supported 00:28:54.389 Reservations: Not Supported 00:28:54.389 Timestamp: Not Supported 00:28:54.389 Copy: Not Supported 00:28:54.389 Volatile Write Cache: Not Present 00:28:54.389 Atomic Write Unit (Normal): 1 00:28:54.389 Atomic Write Unit (PFail): 1 00:28:54.389 Atomic Compare & Write Unit: 1 00:28:54.389 Fused Compare & Write: Supported 00:28:54.389 Scatter-Gather List 00:28:54.389 SGL Command Set: Supported 00:28:54.389 SGL Keyed: Supported 00:28:54.389 SGL Bit Bucket Descriptor: Not Supported 00:28:54.389 SGL Metadata Pointer: Not Supported 00:28:54.389 Oversized SGL: Not Supported 00:28:54.389 SGL Metadata Address: Not Supported 00:28:54.389 SGL Offset: Supported 00:28:54.389 Transport SGL Data Block: Not Supported 00:28:54.389 Replay Protected Memory Block: Not Supported 00:28:54.389 00:28:54.389 Firmware Slot Information 00:28:54.389 ========================= 00:28:54.389 Active slot: 0 00:28:54.389 00:28:54.389 00:28:54.389 Error Log 00:28:54.389 ========= 00:28:54.389 00:28:54.389 Active Namespaces 00:28:54.389 ================= 00:28:54.389 Discovery Log Page 00:28:54.389 ================== 00:28:54.389 Generation Counter: 2 00:28:54.389 Number of Records: 2 00:28:54.389 Record Format: 0 00:28:54.389 00:28:54.389 Discovery Log Entry 0 00:28:54.389 ---------------------- 00:28:54.389 Transport Type: 3 (TCP) 00:28:54.389 Address Family: 1 (IPv4) 00:28:54.389 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:54.389 Entry Flags: 00:28:54.389 Duplicate Returned Information: 1 00:28:54.389 Explicit Persistent Connection Support for Discovery: 1 00:28:54.389 Transport Requirements: 00:28:54.389 Secure Channel: Not Required 00:28:54.389 Port ID: 0 (0x0000) 00:28:54.389 Controller ID: 65535 (0xffff) 00:28:54.389 Admin Max SQ Size: 128 00:28:54.389 Transport Service Identifier: 4420 00:28:54.389 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:54.389 Transport Address: 10.0.0.2 00:28:54.389 
Discovery Log Entry 1 00:28:54.389 ---------------------- 00:28:54.389 Transport Type: 3 (TCP) 00:28:54.389 Address Family: 1 (IPv4) 00:28:54.389 Subsystem Type: 2 (NVM Subsystem) 00:28:54.389 Entry Flags: 00:28:54.389 Duplicate Returned Information: 0 00:28:54.389 Explicit Persistent Connection Support for Discovery: 0 00:28:54.389 Transport Requirements: 00:28:54.389 Secure Channel: Not Required 00:28:54.389 Port ID: 0 (0x0000) 00:28:54.389 Controller ID: 65535 (0xffff) 00:28:54.389 Admin Max SQ Size: 128 00:28:54.389 Transport Service Identifier: 4420 00:28:54.389 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:54.389 Transport Address: 10.0.0.2 [2024-07-15 06:57:41.916188] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:54.389 [2024-07-15 06:57:41.916212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.389 [2024-07-15 06:57:41.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.389 [2024-07-15 06:57:41.916234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.389 [2024-07-15 06:57:41.916244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.389 [2024-07-15 06:57:41.916261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.916287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.389 [2024-07-15 06:57:41.916333] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.389 [2024-07-15 06:57:41.916515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.389 [2024-07-15 06:57:41.916531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.389 [2024-07-15 06:57:41.916538] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916544] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.389 [2024-07-15 06:57:41.916557] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916564] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916570] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.916581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.389 [2024-07-15 06:57:41.916607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.389 [2024-07-15 06:57:41.916729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.389 [2024-07-15 06:57:41.916744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.389 [2024-07-15 06:57:41.916751] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.389 [2024-07-15 06:57:41.916765] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:54.389 [2024-07-15 06:57:41.916777] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:54.389 [2024-07-15 06:57:41.916794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916803] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.916820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.389 [2024-07-15 06:57:41.916840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.389 [2024-07-15 06:57:41.916957] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.389 [2024-07-15 06:57:41.916973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.389 [2024-07-15 06:57:41.916980] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.916987] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.389 [2024-07-15 06:57:41.917003] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917013] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.917029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.389 [2024-07-15 06:57:41.917050] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.389 [2024-07-15 06:57:41.917155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.389 [2024-07-15 06:57:41.917167] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.389 [2024-07-15 06:57:41.917174] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917181] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.389 [2024-07-15 06:57:41.917197] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.917223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.389 [2024-07-15 06:57:41.917243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.389 [2024-07-15 06:57:41.917350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.389 [2024-07-15 
06:57:41.917365] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.389 [2024-07-15 06:57:41.917372] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.389 [2024-07-15 06:57:41.917394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917403] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.389 [2024-07-15 06:57:41.917410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.389 [2024-07-15 06:57:41.917420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.390 [2024-07-15 06:57:41.917440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.390 [2024-07-15 06:57:41.917542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.390 [2024-07-15 06:57:41.917557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.390 [2024-07-15 06:57:41.917564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917574] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.390 [2024-07-15 06:57:41.917591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.390 [2024-07-15 06:57:41.917618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.390 [2024-07-15 06:57:41.917638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.390 [2024-07-15 06:57:41.917738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.390 [2024-07-15 06:57:41.917750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.390 [2024-07-15 06:57:41.917757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.390 [2024-07-15 06:57:41.917780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.917795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.390 [2024-07-15 06:57:41.917805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.390 [2024-07-15 06:57:41.917825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.390 [2024-07-15 06:57:41.921889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.390 [2024-07-15 06:57:41.921905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.390 [2024-07-15 06:57:41.921912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.390 
[2024-07-15 06:57:41.921918] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.390 [2024-07-15 06:57:41.921936] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.921945] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.921951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbc1980) 00:28:54.390 [2024-07-15 06:57:41.921962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.390 [2024-07-15 06:57:41.921983] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc298e0, cid 3, qid 0 00:28:54.390 [2024-07-15 06:57:41.922129] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.390 [2024-07-15 06:57:41.922141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.390 [2024-07-15 06:57:41.922148] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.390 [2024-07-15 06:57:41.922155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc298e0) on tqpair=0xbc1980 00:28:54.390 [2024-07-15 06:57:41.922168] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:54.390 00:28:54.390 06:57:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:54.390 [2024-07-15 06:57:41.956404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
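The first identify pass above interrogated the discovery controller and ended with a clean shutdown ("shutdown complete in 5 milliseconds"); this second pass connects to the NVM subsystem itself (subnqn nqn.2016-06.io.spdk:cnode1). For orientation, a stock Linux initiator sees the same two views through nvme-cli; these host-side commands are illustrative only, they are not run by this test, and /dev/nvme0 is a hypothetical device name:

  nvme discover -t tcp -a 10.0.0.2 -s 4420    # prints the two discovery log entries dumped above
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                     # kernel-side analogue of this spdk_nvme_identify run
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1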
00:28:54.390 [2024-07-15 06:57:41.956449] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737506 ] 00:28:54.390 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.390 [2024-07-15 06:57:41.991629] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:54.390 [2024-07-15 06:57:41.991678] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:54.390 [2024-07-15 06:57:41.991688] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:54.390 [2024-07-15 06:57:41.991701] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:54.390 [2024-07-15 06:57:41.991713] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:54.390 [2024-07-15 06:57:41.991951] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:54.390 [2024-07-15 06:57:41.991991] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1aaa980 0 00:28:54.654 [2024-07-15 06:57:41.998894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:54.654 [2024-07-15 06:57:41.998914] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:54.654 [2024-07-15 06:57:41.998921] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:54.654 [2024-07-15 06:57:41.998928] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:54.654 [2024-07-15 06:57:41.998967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.654 [2024-07-15 06:57:41.998979] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.654 [2024-07-15 06:57:41.998986] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.654 [2024-07-15 06:57:41.999000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:54.654 [2024-07-15 06:57:41.999026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.654 [2024-07-15 06:57:42.006895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.654 [2024-07-15 06:57:42.006913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.654 [2024-07-15 06:57:42.006920] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.654 [2024-07-15 06:57:42.006928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.654 [2024-07-15 06:57:42.006947] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:54.654 [2024-07-15 06:57:42.006957] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:54.654 [2024-07-15 06:57:42.006967] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:54.654 [2024-07-15 06:57:42.006985] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.654 [2024-07-15 06:57:42.006994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.654 [2024-07-15 
06:57:42.007001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.654 [2024-07-15 06:57:42.007012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.654 [2024-07-15 06:57:42.007036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.654 [2024-07-15 06:57:42.007187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.654 [2024-07-15 06:57:42.007199] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.654 [2024-07-15 06:57:42.007206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.654 [2024-07-15 06:57:42.007213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.654 [2024-07-15 06:57:42.007227] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:54.654 [2024-07-15 06:57:42.007241] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:54.654 [2024-07-15 06:57:42.007258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.007283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.007305] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.007420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.007432] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.007439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.007456] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:54.655 [2024-07-15 06:57:42.007469] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.007481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.007505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.007525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.007630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.007642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
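The FABRIC PROPERTY GET capsules in this stretch fetch the VS and CAP registers over the admin queue (the "read vs" / "read cap" states above). Once spdk_nvme_connect() has returned, the values cached during this exchange are available through SPDK's public accessors; a hedged fragment, assuming a connected ctrlr handle like the one in the sketch above:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Print the register values that the PROPERTY GET exchange above cached.
 * Assumes ctrlr was obtained from spdk_nvme_connect() as sketched earlier. */
static void
print_cached_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("NVMe spec version: %u.%u.%u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr, (unsigned)vs.bits.ter);
	printf("Max queue entries: %u\n",
	       (unsigned)cap.bits.mqes + 1);  /* MQES is a zero-based field */
	printf("CSTS.RDY: %u\n", (unsigned)csts.bits.rdy);
}
```

The "NVMe Specification Version (VS): 1.3" and "Maximum Queue Entries: 128" lines in the report below are ultimately printed from these same cached registers.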
00:28:54.655 [2024-07-15 06:57:42.007649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.007665] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.007681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.007707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.007728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.007828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.007840] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.007847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.007854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.007862] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:54.655 [2024-07-15 06:57:42.007871] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.007891] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.008002] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:54.655 [2024-07-15 06:57:42.008013] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.008026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.008051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.008072] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.008210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.008225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.008232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008239] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on 
tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.008248] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:54.655 [2024-07-15 06:57:42.008265] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.008291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.008311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.008420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.008435] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.008441] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008448] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.008457] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:54.655 [2024-07-15 06:57:42.008465] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:54.655 [2024-07-15 06:57:42.008479] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:54.655 [2024-07-15 06:57:42.008492] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:54.655 [2024-07-15 06:57:42.008508] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008517] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.008528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.655 [2024-07-15 06:57:42.008549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.008701] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.655 [2024-07-15 06:57:42.008716] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.655 [2024-07-15 06:57:42.008723] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008729] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=4096, cccid=0 00:28:54.655 [2024-07-15 06:57:42.008737] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b124c0) on tqpair(0x1aaa980): expected_datao=0, payload_size=4096 00:28:54.655 [2024-07-15 06:57:42.008748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008760] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008767] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008779] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.008789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.008796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008802] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.008818] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:54.655 [2024-07-15 06:57:42.008828] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:54.655 [2024-07-15 06:57:42.008835] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:54.655 [2024-07-15 06:57:42.008842] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:54.655 [2024-07-15 06:57:42.008850] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:54.655 [2024-07-15 06:57:42.008858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:54.655 [2024-07-15 06:57:42.008872] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:54.655 [2024-07-15 06:57:42.008892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.008906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.008917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.655 [2024-07-15 06:57:42.008939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.655 [2024-07-15 06:57:42.009053] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.655 [2024-07-15 06:57:42.009065] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.655 [2024-07-15 06:57:42.009072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b124c0) on tqpair=0x1aaa980 00:28:54.655 [2024-07-15 06:57:42.009090] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009098] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009104] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.009114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.655 [2024-07-15 06:57:42.009124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009137] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.009145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.655 [2024-07-15 06:57:42.009155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.009181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.655 [2024-07-15 06:57:42.009192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.655 [2024-07-15 06:57:42.009205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aaa980) 00:28:54.655 [2024-07-15 06:57:42.009228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.656 [2024-07-15 06:57:42.009238] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009256] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.009285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.656 [2024-07-15 06:57:42.009307] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b124c0, cid 0, qid 0 00:28:54.656 [2024-07-15 06:57:42.009333] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12620, cid 1, qid 0 00:28:54.656 [2024-07-15 06:57:42.009341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12780, cid 2, qid 0 00:28:54.656 [2024-07-15 06:57:42.009349] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b128e0, cid 3, qid 0 00:28:54.656 [2024-07-15 06:57:42.009356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.009519] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.009532] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.009539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009545] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.009554] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:54.656 [2024-07-15 06:57:42.009563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009577] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009587] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009598] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.009636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.656 [2024-07-15 06:57:42.009657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.009832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.009847] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.009854] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009861] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.009942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009963] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.009978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.009985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.009996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.656 [2024-07-15 06:57:42.010018] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.010164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.656 [2024-07-15 06:57:42.010177] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.656 [2024-07-15 06:57:42.010183] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010190] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=4096, cccid=4 00:28:54.656 [2024-07-15 06:57:42.010198] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12a40) on tqpair(0x1aaa980): expected_datao=0, payload_size=4096 00:28:54.656 [2024-07-15 06:57:42.010205] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010221] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010230] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010290] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.010305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.010311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010318] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.010334] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:54.656 [2024-07-15 06:57:42.010357] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.010375] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.010389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.010408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.656 [2024-07-15 06:57:42.010430] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.010563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.656 [2024-07-15 06:57:42.010579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.656 [2024-07-15 06:57:42.010585] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010592] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=4096, cccid=4 00:28:54.656 [2024-07-15 06:57:42.010599] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12a40) on tqpair(0x1aaa980): expected_datao=0, payload_size=4096 00:28:54.656 [2024-07-15 06:57:42.010607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010623] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010633] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010694] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.010709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.010719] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010727] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.010748] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.010766] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.010781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.010789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.010800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.656 [2024-07-15 06:57:42.010821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.014888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.656 [2024-07-15 06:57:42.014905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.656 [2024-07-15 06:57:42.014912] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.014918] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=4096, cccid=4 00:28:54.656 [2024-07-15 06:57:42.014925] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12a40) on tqpair(0x1aaa980): expected_datao=0, payload_size=4096 00:28:54.656 [2024-07-15 06:57:42.014932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.014942] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.014949] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.014958] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.014966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.014973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.014979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.014993] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015008] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015038] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015050] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015059] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015067] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:54.656 [2024-07-15 06:57:42.015075] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:54.656 [2024-07-15 06:57:42.015083] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:54.656 [2024-07-15 06:57:42.015107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.015116] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.015127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.656 [2024-07-15 06:57:42.015142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.015150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.015172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aaa980) 00:28:54.656 [2024-07-15 06:57:42.015181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.656 [2024-07-15 06:57:42.015207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.656 [2024-07-15 06:57:42.015219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12ba0, cid 5, qid 0 00:28:54.656 [2024-07-15 06:57:42.015397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.015409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.015416] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.656 [2024-07-15 06:57:42.015423] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.656 [2024-07-15 06:57:42.015435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.656 [2024-07-15 06:57:42.015444] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.656 [2024-07-15 06:57:42.015451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12ba0) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.015474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.015493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.015514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12ba0, cid 5, qid 0 00:28:54.657 [2024-07-15 06:57:42.015650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.015662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.015668] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015675] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12ba0) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.015692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.015711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.015730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12ba0, cid 5, qid 0 00:28:54.657 [2024-07-15 06:57:42.015848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.015863] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.015869] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015882] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12ba0) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.015901] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.015910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.015921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.015942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12ba0, cid 5, qid 0 00:28:54.657 [2024-07-15 06:57:42.016048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.016060] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.016071] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016078] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12ba0) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.016098] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.016119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.016130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016138] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.016147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.016159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016166] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.016176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.016187] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1aaa980) 00:28:54.657 [2024-07-15 06:57:42.016204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.657 [2024-07-15 06:57:42.016240] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12ba0, cid 5, qid 0 00:28:54.657 [2024-07-15 06:57:42.016251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12a40, cid 4, qid 0 00:28:54.657 [2024-07-15 06:57:42.016259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1b12d00, cid 6, qid 0 00:28:54.657 [2024-07-15 06:57:42.016266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12e60, cid 7, qid 0 00:28:54.657 [2024-07-15 06:57:42.016520] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.657 [2024-07-15 06:57:42.016533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.657 [2024-07-15 06:57:42.016540] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016547] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=8192, cccid=5 00:28:54.657 [2024-07-15 06:57:42.016554] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12ba0) on tqpair(0x1aaa980): expected_datao=0, payload_size=8192 00:28:54.657 [2024-07-15 06:57:42.016562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016597] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016607] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016616] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.657 [2024-07-15 06:57:42.016625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.657 [2024-07-15 06:57:42.016631] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016638] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=512, cccid=4 00:28:54.657 [2024-07-15 06:57:42.016645] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12a40) on tqpair(0x1aaa980): expected_datao=0, payload_size=512 00:28:54.657 [2024-07-15 06:57:42.016652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016661] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016672] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.657 [2024-07-15 06:57:42.016690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.657 [2024-07-15 06:57:42.016696] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016703] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=512, cccid=6 00:28:54.657 [2024-07-15 06:57:42.016710] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12d00) on tqpair(0x1aaa980): expected_datao=0, payload_size=512 00:28:54.657 [2024-07-15 06:57:42.016717] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016726] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016733] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:54.657 [2024-07-15 06:57:42.016750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:54.657 [2024-07-15 06:57:42.016757] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016763] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aaa980): datao=0, datal=4096, cccid=7 
00:28:54.657 [2024-07-15 06:57:42.016770] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b12e60) on tqpair(0x1aaa980): expected_datao=0, payload_size=4096 00:28:54.657 [2024-07-15 06:57:42.016777] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016787] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016794] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.016815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.016821] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016828] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12ba0) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.016848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.016859] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.016866] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12a40) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.016895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.016907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.016913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12d00) on tqpair=0x1aaa980 00:28:54.657 [2024-07-15 06:57:42.016934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.657 [2024-07-15 06:57:42.016956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.657 [2024-07-15 06:57:42.016963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.657 [2024-07-15 06:57:42.016970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12e60) on tqpair=0x1aaa980 00:28:54.657 ===================================================== 00:28:54.657 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.657 ===================================================== 00:28:54.657 Controller Capabilities/Features 00:28:54.657 ================================ 00:28:54.657 Vendor ID: 8086 00:28:54.657 Subsystem Vendor ID: 8086 00:28:54.657 Serial Number: SPDK00000000000001 00:28:54.657 Model Number: SPDK bdev Controller 00:28:54.657 Firmware Version: 24.05.1 00:28:54.657 Recommended Arb Burst: 6 00:28:54.657 IEEE OUI Identifier: e4 d2 5c 00:28:54.657 Multi-path I/O 00:28:54.657 May have multiple subsystem ports: Yes 00:28:54.657 May have multiple controllers: Yes 00:28:54.657 Associated with SR-IOV VF: No 00:28:54.657 Max Data Transfer Size: 131072 00:28:54.657 Max Number of Namespaces: 32 00:28:54.657 Max Number of I/O Queues: 127 00:28:54.657 NVMe Specification Version (VS): 1.3 00:28:54.657 NVMe Specification Version (Identify): 1.3 00:28:54.657 Maximum Queue Entries: 128 00:28:54.657 Contiguous Queues Required: Yes 00:28:54.657 Arbitration Mechanisms Supported 00:28:54.657 Weighted Round Robin: Not Supported 00:28:54.657 Vendor 
Specific: Not Supported 00:28:54.657 Reset Timeout: 15000 ms 00:28:54.657 Doorbell Stride: 4 bytes 00:28:54.657 NVM Subsystem Reset: Not Supported 00:28:54.657 Command Sets Supported 00:28:54.657 NVM Command Set: Supported 00:28:54.657 Boot Partition: Not Supported 00:28:54.657 Memory Page Size Minimum: 4096 bytes 00:28:54.657 Memory Page Size Maximum: 4096 bytes 00:28:54.657 Persistent Memory Region: Not Supported 00:28:54.657 Optional Asynchronous Events Supported 00:28:54.657 Namespace Attribute Notices: Supported 00:28:54.657 Firmware Activation Notices: Not Supported 00:28:54.657 ANA Change Notices: Not Supported 00:28:54.657 PLE Aggregate Log Change Notices: Not Supported 00:28:54.657 LBA Status Info Alert Notices: Not Supported 00:28:54.658 EGE Aggregate Log Change Notices: Not Supported 00:28:54.658 Normal NVM Subsystem Shutdown event: Not Supported 00:28:54.658 Zone Descriptor Change Notices: Not Supported 00:28:54.658 Discovery Log Change Notices: Not Supported 00:28:54.658 Controller Attributes 00:28:54.658 128-bit Host Identifier: Supported 00:28:54.658 Non-Operational Permissive Mode: Not Supported 00:28:54.658 NVM Sets: Not Supported 00:28:54.658 Read Recovery Levels: Not Supported 00:28:54.658 Endurance Groups: Not Supported 00:28:54.658 Predictable Latency Mode: Not Supported 00:28:54.658 Traffic Based Keep ALive: Not Supported 00:28:54.658 Namespace Granularity: Not Supported 00:28:54.658 SQ Associations: Not Supported 00:28:54.658 UUID List: Not Supported 00:28:54.658 Multi-Domain Subsystem: Not Supported 00:28:54.658 Fixed Capacity Management: Not Supported 00:28:54.658 Variable Capacity Management: Not Supported 00:28:54.658 Delete Endurance Group: Not Supported 00:28:54.658 Delete NVM Set: Not Supported 00:28:54.658 Extended LBA Formats Supported: Not Supported 00:28:54.658 Flexible Data Placement Supported: Not Supported 00:28:54.658 00:28:54.658 Controller Memory Buffer Support 00:28:54.658 ================================ 00:28:54.658 Supported: No 00:28:54.658 00:28:54.658 Persistent Memory Region Support 00:28:54.658 ================================ 00:28:54.658 Supported: No 00:28:54.658 00:28:54.658 Admin Command Set Attributes 00:28:54.658 ============================ 00:28:54.658 Security Send/Receive: Not Supported 00:28:54.658 Format NVM: Not Supported 00:28:54.658 Firmware Activate/Download: Not Supported 00:28:54.658 Namespace Management: Not Supported 00:28:54.658 Device Self-Test: Not Supported 00:28:54.658 Directives: Not Supported 00:28:54.658 NVMe-MI: Not Supported 00:28:54.658 Virtualization Management: Not Supported 00:28:54.658 Doorbell Buffer Config: Not Supported 00:28:54.658 Get LBA Status Capability: Not Supported 00:28:54.658 Command & Feature Lockdown Capability: Not Supported 00:28:54.658 Abort Command Limit: 4 00:28:54.658 Async Event Request Limit: 4 00:28:54.658 Number of Firmware Slots: N/A 00:28:54.658 Firmware Slot 1 Read-Only: N/A 00:28:54.658 Firmware Activation Without Reset: N/A 00:28:54.658 Multiple Update Detection Support: N/A 00:28:54.658 Firmware Update Granularity: No Information Provided 00:28:54.658 Per-Namespace SMART Log: No 00:28:54.658 Asymmetric Namespace Access Log Page: Not Supported 00:28:54.658 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:54.658 Command Effects Log Page: Supported 00:28:54.658 Get Log Page Extended Data: Supported 00:28:54.658 Telemetry Log Pages: Not Supported 00:28:54.658 Persistent Event Log Pages: Not Supported 00:28:54.658 Supported Log Pages Log Page: May Support 00:28:54.658 Commands 
Supported & Effects Log Page: Not Supported 00:28:54.658 Feature Identifiers & Effects Log Page:May Support 00:28:54.658 NVMe-MI Commands & Effects Log Page: May Support 00:28:54.658 Data Area 4 for Telemetry Log: Not Supported 00:28:54.658 Error Log Page Entries Supported: 128 00:28:54.658 Keep Alive: Supported 00:28:54.658 Keep Alive Granularity: 10000 ms 00:28:54.658 00:28:54.658 NVM Command Set Attributes 00:28:54.658 ========================== 00:28:54.658 Submission Queue Entry Size 00:28:54.658 Max: 64 00:28:54.658 Min: 64 00:28:54.658 Completion Queue Entry Size 00:28:54.658 Max: 16 00:28:54.658 Min: 16 00:28:54.658 Number of Namespaces: 32 00:28:54.658 Compare Command: Supported 00:28:54.658 Write Uncorrectable Command: Not Supported 00:28:54.658 Dataset Management Command: Supported 00:28:54.658 Write Zeroes Command: Supported 00:28:54.658 Set Features Save Field: Not Supported 00:28:54.658 Reservations: Supported 00:28:54.658 Timestamp: Not Supported 00:28:54.658 Copy: Supported 00:28:54.658 Volatile Write Cache: Present 00:28:54.658 Atomic Write Unit (Normal): 1 00:28:54.658 Atomic Write Unit (PFail): 1 00:28:54.658 Atomic Compare & Write Unit: 1 00:28:54.658 Fused Compare & Write: Supported 00:28:54.658 Scatter-Gather List 00:28:54.658 SGL Command Set: Supported 00:28:54.658 SGL Keyed: Supported 00:28:54.658 SGL Bit Bucket Descriptor: Not Supported 00:28:54.658 SGL Metadata Pointer: Not Supported 00:28:54.658 Oversized SGL: Not Supported 00:28:54.658 SGL Metadata Address: Not Supported 00:28:54.658 SGL Offset: Supported 00:28:54.658 Transport SGL Data Block: Not Supported 00:28:54.658 Replay Protected Memory Block: Not Supported 00:28:54.658 00:28:54.658 Firmware Slot Information 00:28:54.658 ========================= 00:28:54.658 Active slot: 1 00:28:54.658 Slot 1 Firmware Revision: 24.05.1 00:28:54.658 00:28:54.658 00:28:54.658 Commands Supported and Effects 00:28:54.658 ============================== 00:28:54.658 Admin Commands 00:28:54.658 -------------- 00:28:54.658 Get Log Page (02h): Supported 00:28:54.658 Identify (06h): Supported 00:28:54.658 Abort (08h): Supported 00:28:54.658 Set Features (09h): Supported 00:28:54.658 Get Features (0Ah): Supported 00:28:54.658 Asynchronous Event Request (0Ch): Supported 00:28:54.658 Keep Alive (18h): Supported 00:28:54.658 I/O Commands 00:28:54.658 ------------ 00:28:54.658 Flush (00h): Supported LBA-Change 00:28:54.658 Write (01h): Supported LBA-Change 00:28:54.658 Read (02h): Supported 00:28:54.658 Compare (05h): Supported 00:28:54.658 Write Zeroes (08h): Supported LBA-Change 00:28:54.658 Dataset Management (09h): Supported LBA-Change 00:28:54.658 Copy (19h): Supported LBA-Change 00:28:54.658 Unknown (79h): Supported LBA-Change 00:28:54.658 Unknown (7Ah): Supported 00:28:54.658 00:28:54.658 Error Log 00:28:54.658 ========= 00:28:54.658 00:28:54.658 Arbitration 00:28:54.658 =========== 00:28:54.658 Arbitration Burst: 1 00:28:54.658 00:28:54.658 Power Management 00:28:54.658 ================ 00:28:54.658 Number of Power States: 1 00:28:54.658 Current Power State: Power State #0 00:28:54.658 Power State #0: 00:28:54.658 Max Power: 0.00 W 00:28:54.658 Non-Operational State: Operational 00:28:54.658 Entry Latency: Not Reported 00:28:54.658 Exit Latency: Not Reported 00:28:54.658 Relative Read Throughput: 0 00:28:54.658 Relative Read Latency: 0 00:28:54.658 Relative Write Throughput: 0 00:28:54.658 Relative Write Latency: 0 00:28:54.658 Idle Power: Not Reported 00:28:54.658 Active Power: Not Reported 00:28:54.658 Non-Operational 
Permissive Mode: Not Supported 00:28:54.658 00:28:54.658 Health Information 00:28:54.658 ================== 00:28:54.658 Critical Warnings: 00:28:54.658 Available Spare Space: OK 00:28:54.658 Temperature: OK 00:28:54.658 Device Reliability: OK 00:28:54.658 Read Only: No 00:28:54.658 Volatile Memory Backup: OK 00:28:54.658 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:54.658 Temperature Threshold: [2024-07-15 06:57:42.017093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1aaa980) 00:28:54.658 [2024-07-15 06:57:42.017117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.658 [2024-07-15 06:57:42.017140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b12e60, cid 7, qid 0 00:28:54.658 [2024-07-15 06:57:42.017291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.658 [2024-07-15 06:57:42.017303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.658 [2024-07-15 06:57:42.017313] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017321] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b12e60) on tqpair=0x1aaa980 00:28:54.658 [2024-07-15 06:57:42.017363] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:54.658 [2024-07-15 06:57:42.017384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.658 [2024-07-15 06:57:42.017396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.658 [2024-07-15 06:57:42.017406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.658 [2024-07-15 06:57:42.017415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.658 [2024-07-15 06:57:42.017428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017436] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017442] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aaa980) 00:28:54.658 [2024-07-15 06:57:42.017467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.658 [2024-07-15 06:57:42.017489] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b128e0, cid 3, qid 0 00:28:54.658 [2024-07-15 06:57:42.017657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:54.658 [2024-07-15 06:57:42.017673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:54.658 [2024-07-15 06:57:42.017680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b128e0) on tqpair=0x1aaa980 00:28:54.658 [2024-07-15 06:57:42.017699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:54.658 [2024-07-15 06:57:42.017707] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:54.658 [2024-07-15 06:57:42.017713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aaa980)
00:28:54.658 [2024-07-15 06:57:42.017724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.659 [2024-07-15 06:57:42.017749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b128e0, cid 3, qid 0
00:28:54.659 [2024-07-15 06:57:42.017869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:54.659 [2024-07-15 06:57:42.017891] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:54.659 [2024-07-15 06:57:42.017899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:54.659 [2024-07-15 06:57:42.017906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b128e0) on tqpair=0x1aaa980
00:28:54.659 [2024-07-15 06:57:42.017915] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:28:54.659 [2024-07-15 06:57:42.017923] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:28:54.659 [2024-07-15 06:57:42.017938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:54.659 [2024-07-15 06:57:42.017947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:54.659 [2024-07-15 06:57:42.017954] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aaa980)
00:28:54.659 [2024-07-15 06:57:42.017964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.659 [2024-07-15 06:57:42.017985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b128e0, cid 3, qid 0
[... five further FABRIC PROPERTY GET capsule/response poll cycles (06:57:42.018099 through 06:57:42.023008) elided; they are identical to the cycle above apart from timestamps ...]
00:28:54.659 [2024-07-15 06:57:42.023150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:54.659 [2024-07-15 06:57:42.023162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:54.659 [2024-07-15 06:57:42.023169] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:54.659 [2024-07-15 06:57:42.023176] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b128e0) on tqpair=0x1aaa980
00:28:54.659 [2024-07-15 06:57:42.023190] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:28:54.659 0 Kelvin (-273 Celsius)
00:28:54.659 Available Spare: 0%
00:28:54.659 Available Spare Threshold: 0%
00:28:54.659 Life Percentage Used: 0%
00:28:54.659 Data Units Read: 0
00:28:54.659 Data Units Written: 0
00:28:54.659 Host Read Commands: 0
00:28:54.659 Host Write Commands: 0
00:28:54.659 Controller Busy Time: 0 minutes
00:28:54.659 Power Cycles: 0
00:28:54.659 Power On Hours: 0 hours
00:28:54.659 Unsafe Shutdowns: 0
00:28:54.659 Unrecoverable Media Errors: 0
00:28:54.659 Lifetime Error Log Entries: 0
00:28:54.659 Warning Temperature Time: 0 minutes
00:28:54.659 Critical Temperature Time: 0 minutes
00:28:54.659 
00:28:54.659 Number of Queues
00:28:54.659 ================
00:28:54.659 Number of I/O Submission Queues: 127
00:28:54.659 Number of I/O Completion Queues: 127
00:28:54.659 
00:28:54.659 Active Namespaces
00:28:54.659 =================
00:28:54.659 Namespace ID:1
00:28:54.659 Error Recovery Timeout: Unlimited
00:28:54.659 Command Set Identifier: NVM (00h)
00:28:54.659 Deallocate: Supported
00:28:54.659 Deallocated/Unwritten Error: Not Supported
00:28:54.659 Deallocated Read Value: Unknown
00:28:54.659 Deallocate in Write Zeroes: Not Supported
00:28:54.659 Deallocated Guard Field: 0xFFFF
00:28:54.659 Flush: Supported
00:28:54.659 Reservation: Supported
00:28:54.659 Namespace Sharing Capabilities: Multiple Controllers
00:28:54.659 Size (in LBAs): 131072 (0GiB)
00:28:54.659 Capacity (in LBAs): 131072 (0GiB)
00:28:54.659 Utilization (in LBAs): 131072 (0GiB)
00:28:54.659 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:54.659 EUI64: ABCDEF0123456789
00:28:54.660 UUID: 1ca1f77c-a2fc-49dc-b95c-bb38a62ba005
00:28:54.660 Thin Provisioning: Not Supported
00:28:54.660 Per-NS Atomic Units: Yes
00:28:54.660 Atomic Boundary Size (Normal): 0
00:28:54.660 Atomic Boundary Size (PFail): 0
00:28:54.660 Atomic Boundary Offset: 0
00:28:54.660 Maximum Single Source Range Length: 65535
00:28:54.660 Maximum Copy Length: 65535
00:28:54.660 Maximum Source Range Count: 1
00:28:54.660 NGUID/EUI64 Never Reused: No
00:28:54.660 Namespace Write Protected: No
00:28:54.660 Number of LBA Formats: 1
00:28:54.660 Current LBA Format: LBA Format #00
00:28:54.660 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:54.660 
00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify --
common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:54.660 rmmod nvme_tcp 00:28:54.660 rmmod nvme_fabrics 00:28:54.660 rmmod nvme_keyring 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 737394 ']' 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 737394 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 737394 ']' 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 737394 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 737394 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 737394' 00:28:54.660 killing process with pid 737394 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 737394 00:28:54.660 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 737394 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.919 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.920 06:57:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.822 06:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:57.080 00:28:57.080 real 0m5.332s 00:28:57.080 user 0m4.161s 00:28:57.080 sys 0m1.873s 00:28:57.080 
06:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:57.080 06:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.080 ************************************ 00:28:57.080 END TEST nvmf_identify 00:28:57.080 ************************************ 00:28:57.080 06:57:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:57.080 06:57:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:57.080 06:57:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:57.080 06:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:57.080 ************************************ 00:28:57.080 START TEST nvmf_perf 00:28:57.080 ************************************ 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:57.080 * Looking for test storage... 00:28:57.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.080 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:57.081 06:57:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.986 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:58.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:58.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:58.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.987 06:57:46 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:58.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:58.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:58.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms
00:28:58.987 
00:28:58.987 --- 10.0.0.2 ping statistics ---
00:28:58.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:58.987 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
00:28:58.987 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:59.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:59.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:28:59.246 
00:28:59.246 --- 10.0.0.1 ping statistics ---
00:28:59.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:59.246 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=739431
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 739431
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 739431 ']'
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:59.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:59.246 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:59.246 [2024-07-15 06:57:46.673688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:59.246 [2024-07-15 06:57:46.673769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:59.246 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.246 [2024-07-15 06:57:46.742325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:59.246 [2024-07-15 06:57:46.833283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:59.246 [2024-07-15 06:57:46.833345] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:59.246 [2024-07-15 06:57:46.833361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:59.246 [2024-07-15 06:57:46.833375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:59.246 [2024-07-15 06:57:46.833388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:59.246 [2024-07-15 06:57:46.833470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:59.246 [2024-07-15 06:57:46.833539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:59.246 [2024-07-15 06:57:46.833632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:28:59.246 [2024-07-15 06:57:46.833634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:28:59.504 06:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:29:02.783 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:29:02.783 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:29:02.783 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0
00:29:02.783 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:03.041 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:29:03.041 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']'
00:29:03.041 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:29:03.041 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:29:03.041 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:29:03.298 [2024-07-15 06:57:50.827386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
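Note: the target bring-up traced here reduces to a short RPC sequence. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket; the subsystem, namespace, and listener calls appear verbatim in the trace that follows:

    # Stand up the NVMe-oF TCP target this test exercises (sketch)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, initiators reach the subsystem at 10.0.0.2:4420, which is how the spdk_nvme_perf runs below address it (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420').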
00:29:03.298 06:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.555 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:03.555 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.811 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:03.811 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:04.068 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.325 [2024-07-15 06:57:51.835071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.325 06:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.583 06:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:04.583 06:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:04.583 06:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:04.583 06:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:05.951 Initializing NVMe Controllers 00:29:05.951 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:05.951 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:05.951 Initialization complete. Launching workers. 00:29:05.951 ======================================================== 00:29:05.951 Latency(us) 00:29:05.951 Device Information : IOPS MiB/s Average min max 00:29:05.951 PCIE (0000:88:00.0) NSID 1 from core 0: 85714.68 334.82 372.78 16.02 4646.10 00:29:05.951 ======================================================== 00:29:05.951 Total : 85714.68 334.82 372.78 16.02 4646.10 00:29:05.951 00:29:05.951 06:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.951 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.320 Initializing NVMe Controllers 00:29:07.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:07.320 Initialization complete. Launching workers. 
00:29:07.320 ======================================================== 00:29:07.320 Latency(us) 00:29:07.320 Device Information : IOPS MiB/s Average min max 00:29:07.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.00 0.33 11983.48 169.66 45729.72 00:29:07.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16483.19 7936.12 47903.01 00:29:07.320 ======================================================== 00:29:07.320 Total : 145.00 0.57 13876.46 169.66 47903.01 00:29:07.320 00:29:07.320 06:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.320 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.248 Initializing NVMe Controllers 00:29:08.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:08.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:08.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:08.248 Initialization complete. Launching workers. 00:29:08.248 ======================================================== 00:29:08.248 Latency(us) 00:29:08.248 Device Information : IOPS MiB/s Average min max 00:29:08.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8504.92 33.22 3776.21 438.50 9043.78 00:29:08.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3806.97 14.87 8443.22 5372.70 16752.18 00:29:08.248 ======================================================== 00:29:08.248 Total : 12311.89 48.09 5219.30 438.50 16752.18 00:29:08.248 00:29:08.503 06:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:08.503 06:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:08.503 06:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.503 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.064 Initializing NVMe Controllers 00:29:11.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.064 Controller IO queue size 128, less than required. 00:29:11.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.064 Controller IO queue size 128, less than required. 00:29:11.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:11.064 Initialization complete. Launching workers. 
00:29:11.064 ======================================================== 00:29:11.064 Latency(us) 00:29:11.065 Device Information : IOPS MiB/s Average min max 00:29:11.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1117.24 279.31 117324.45 73997.98 158610.77 00:29:11.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.59 144.40 224789.89 85044.76 342886.48 00:29:11.065 ======================================================== 00:29:11.065 Total : 1694.83 423.71 153948.17 73997.98 342886.48 00:29:11.065 00:29:11.065 06:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:11.065 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.345 No valid NVMe controllers or AIO or URING devices found 00:29:11.345 Initializing NVMe Controllers 00:29:11.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.345 Controller IO queue size 128, less than required. 00:29:11.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.345 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:11.345 Controller IO queue size 128, less than required. 00:29:11.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.345 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:11.345 WARNING: Some requested NVMe devices were skipped 00:29:11.345 06:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:11.345 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.872 Initializing NVMe Controllers 00:29:13.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.872 Controller IO queue size 128, less than required. 00:29:13.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:13.872 Controller IO queue size 128, less than required. 00:29:13.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:13.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:13.872 Initialization complete. Launching workers. 
00:29:13.872 
00:29:13.872 ====================
00:29:13.872 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:29:13.872 TCP transport:
00:29:13.872 polls: 13665
00:29:13.872 idle_polls: 5814
00:29:13.872 sock_completions: 7851
00:29:13.872 nvme_completions: 5253
00:29:13.872 submitted_requests: 7866
00:29:13.872 queued_requests: 1
00:29:13.872 
00:29:13.872 ====================
00:29:13.872 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:29:13.872 TCP transport:
00:29:13.872 polls: 16685
00:29:13.872 idle_polls: 8968
00:29:13.872 sock_completions: 7717
00:29:13.872 nvme_completions: 5451
00:29:13.872 submitted_requests: 8300
00:29:13.872 queued_requests: 1
00:29:13.872 ========================================================
00:29:13.872 Latency(us)
00:29:13.872 Device Information : IOPS MiB/s Average min max
00:29:13.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1312.94 328.23 100284.34 60954.36 152157.37
00:29:13.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1362.44 340.61 94729.78 45467.36 134323.17
00:29:13.872 ========================================================
00:29:13.872 Total : 2675.37 668.84 97455.68 45467.36 152157.37
00:29:13.872 
00:29:13.872 06:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:29:13.872 06:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:14.129 06:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:29:14.129 06:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:29:14.129 06:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[
00:29:17.402 {
00:29:17.402 "uuid": "c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa",
00:29:17.402 "name": "lvs_0",
00:29:17.402 "base_bdev": "Nvme0n1",
00:29:17.402 "total_data_clusters": 238234,
00:29:17.402 "free_clusters": 238234,
00:29:17.402 "block_size": 512,
00:29:17.402 "cluster_size": 4194304
00:29:17.402 }
00:29:17.402 ]'
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa") .free_clusters'
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234
00:29:17.402 06:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa") .cluster_size'
00:29:17.659 06:58:05
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:17.659 06:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:29:17.659 06:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:29:17.659 952936 00:29:17.659 06:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:17.659 06:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:17.659 06:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa lbd_0 20480 00:29:17.916 06:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2343c449-90fd-4fad-85fc-c1c4b5635fbf 00:29:17.916 06:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2343c449-90fd-4fad-85fc-c1c4b5635fbf lvs_n_0 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=eea10264-bc0f-447d-a1c4-b774f59ec462 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb eea10264-bc0f-447d-a1c4-b774f59ec462 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=eea10264-bc0f-447d-a1c4-b774f59ec462 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:18.849 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:19.107 { 00:29:19.107 "uuid": "c4f9ce82-c6cb-47df-bb9f-3dd297c7f7aa", 00:29:19.107 "name": "lvs_0", 00:29:19.107 "base_bdev": "Nvme0n1", 00:29:19.107 "total_data_clusters": 238234, 00:29:19.107 "free_clusters": 233114, 00:29:19.107 "block_size": 512, 00:29:19.107 "cluster_size": 4194304 00:29:19.107 }, 00:29:19.107 { 00:29:19.107 "uuid": "eea10264-bc0f-447d-a1c4-b774f59ec462", 00:29:19.107 "name": "lvs_n_0", 00:29:19.107 "base_bdev": "2343c449-90fd-4fad-85fc-c1c4b5635fbf", 00:29:19.107 "total_data_clusters": 5114, 00:29:19.107 "free_clusters": 5114, 00:29:19.107 "block_size": 512, 00:29:19.107 "cluster_size": 4194304 00:29:19.107 } 00:29:19.107 ]' 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="eea10264-bc0f-447d-a1c4-b774f59ec462") .free_clusters' 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="eea10264-bc0f-447d-a1c4-b774f59ec462") .cluster_size' 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:29:19.107 20456 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:19.107 06:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eea10264-bc0f-447d-a1c4-b774f59ec462 lbd_nest_0 20456 00:29:19.365 06:58:06 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=1125410f-13ab-47f3-aeda-e4ea90715905 00:29:19.365 06:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.622 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:19.622 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1125410f-13ab-47f3-aeda-e4ea90715905 00:29:19.880 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.138 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:20.138 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:20.138 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:20.138 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:20.138 06:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.138 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.321 Initializing NVMe Controllers 00:29:32.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.321 Initialization complete. Launching workers. 00:29:32.321 ======================================================== 00:29:32.321 Latency(us) 00:29:32.321 Device Information : IOPS MiB/s Average min max 00:29:32.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.30 0.02 21154.86 201.20 45798.78 00:29:32.321 ======================================================== 00:29:32.321 Total : 47.30 0.02 21154.86 201.20 45798.78 00:29:32.321 00:29:32.321 06:58:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:32.321 06:58:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.321 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.333 Initializing NVMe Controllers 00:29:42.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.333 Initialization complete. Launching workers. 
00:29:42.333 ======================================================== 00:29:42.333 Latency(us) 00:29:42.333 Device Information : IOPS MiB/s Average min max 00:29:42.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.08 10.26 12182.80 5004.68 47869.62 00:29:42.333 ======================================================== 00:29:42.333 Total : 82.08 10.26 12182.80 5004.68 47869.62 00:29:42.333 00:29:42.333 06:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:42.333 06:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:42.333 06:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.334 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.295 Initializing NVMe Controllers 00:29:52.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.295 Initialization complete. Launching workers. 00:29:52.295 ======================================================== 00:29:52.295 Latency(us) 00:29:52.295 Device Information : IOPS MiB/s Average min max 00:29:52.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7498.20 3.66 4267.49 288.53 11869.65 00:29:52.295 ======================================================== 00:29:52.295 Total : 7498.20 3.66 4267.49 288.53 11869.65 00:29:52.295 00:29:52.295 06:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.295 06:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.295 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.253 Initializing NVMe Controllers 00:30:02.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.253 Initialization complete. Launching workers. 00:30:02.253 ======================================================== 00:30:02.253 Latency(us) 00:30:02.253 Device Information : IOPS MiB/s Average min max 00:30:02.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2613.11 326.64 12246.28 1093.03 29077.59 00:30:02.253 ======================================================== 00:30:02.253 Total : 2613.11 326.64 12246.28 1093.03 29077.59 00:30:02.253 00:30:02.253 06:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:02.253 06:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:02.253 06:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.253 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.215 Initializing NVMe Controllers 00:30:12.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.215 Controller IO queue size 128, less than required. 00:30:12.215 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:12.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.215 Initialization complete. Launching workers. 00:30:12.215 ======================================================== 00:30:12.215 Latency(us) 00:30:12.215 Device Information : IOPS MiB/s Average min max 00:30:12.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11923.97 5.82 10740.12 1817.19 24338.19 00:30:12.216 ======================================================== 00:30:12.216 Total : 11923.97 5.82 10740.12 1817.19 24338.19 00:30:12.216 00:30:12.216 06:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:12.216 06:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.216 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.198 Initializing NVMe Controllers 00:30:22.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.198 Controller IO queue size 128, less than required. 00:30:22.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:22.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:22.198 Initialization complete. Launching workers. 00:30:22.198 ======================================================== 00:30:22.198 Latency(us) 00:30:22.198 Device Information : IOPS MiB/s Average min max 00:30:22.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1215.60 151.95 106162.69 23195.37 208268.76 00:30:22.198 ======================================================== 00:30:22.198 Total : 1215.60 151.95 106162.69 23195.37 208268.76 00:30:22.198 00:30:22.198 06:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.456 06:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1125410f-13ab-47f3-aeda-e4ea90715905 00:30:23.388 06:59:10 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:23.646 06:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2343c449-90fd-4fad-85fc-c1c4b5635fbf 00:30:23.903 06:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.161 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.161 rmmod nvme_tcp 00:30:24.161 rmmod nvme_fabrics 00:30:24.162 rmmod nvme_keyring 00:30:24.162 06:59:11 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 739431 ']' 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 739431 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 739431 ']' 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 739431 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 739431 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 739431' 00:30:24.162 killing process with pid 739431 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 739431 00:30:24.162 06:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 739431 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.062 06:59:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.965 06:59:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.965 00:30:27.965 real 1m30.859s 00:30:27.965 user 5m32.696s 00:30:27.965 sys 0m16.307s 00:30:27.965 06:59:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:27.965 06:59:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.965 ************************************ 00:30:27.966 END TEST nvmf_perf 00:30:27.966 ************************************ 00:30:27.966 06:59:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:27.966 06:59:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:27.966 06:59:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.966 06:59:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.966 ************************************ 00:30:27.966 START TEST nvmf_fio_host 00:30:27.966 ************************************ 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:27.966 * Looking for test storage... 
00:30:27.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.966 06:59:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:29.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:29.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.868 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:29.869 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:29.869 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
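The block above is nvmf/common.sh enumerating NVMe-oF-capable NICs: it builds PCI ID allowlists for Intel E810/X722 and Mellanox parts, matches each discovered function (here the two 0x159b E810 ports at 0000:0a:00.0 and 0000:0a:00.1), and resolves the kernel net device behind each one through sysfs. A condensed sketch of that resolution step, reusing the array names from the trace:

    # Sketch of the sysfs resolution traced above; pci_devs holds the two
    # E810 functions reported by the "Found 0000:0a:00.x" lines.
    pci_devs=(0000:0a:00.0 0000:0a:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the device name only
      net_devs+=("${pci_net_devs[@]}")
    done
    echo "${net_devs[@]}"   # cvl_0_0 cvl_0_1 in this run

With two physical ports found, the harness sets is_hw=yes and proceeds with the hardware (NET_TYPE=phy) TCP setup.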
00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:30:29.869 00:30:29.869 --- 10.0.0.2 ping statistics --- 00:30:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.869 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:29.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:30:29.869 00:30:29.869 --- 10.0.0.1 ping statistics --- 00:30:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.869 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=751406 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 751406 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 751406 ']' 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:29.869 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.127 [2024-07-15 06:59:17.495142] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:30.127 [2024-07-15 06:59:17.495248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.127 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.127 [2024-07-15 06:59:17.560803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.127 [2024-07-15 06:59:17.651051] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
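Both pings succeeding confirms the topology nvmf_tcp_init assembled just above: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace to host the target, while the sibling port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, keeping the two endpoints isolated so the connection behaves like a real two-host link. Condensed from the commands in this log (the nvmf_tgt path is shortened here for readability):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF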
00:30:30.127 [2024-07-15 06:59:17.651127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.127 [2024-07-15 06:59:17.651142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.127 [2024-07-15 06:59:17.651153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.127 [2024-07-15 06:59:17.651164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.127 [2024-07-15 06:59:17.651285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.127 [2024-07-15 06:59:17.651363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.127 [2024-07-15 06:59:17.651383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.127 [2024-07-15 06:59:17.651385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.384 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:30.384 06:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:30.384 06:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:30.642 [2024-07-15 06:59:18.044310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.642 06:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:30.642 06:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.642 06:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.642 06:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:30.899 Malloc1 00:30:30.899 06:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.157 06:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:31.415 06:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.673 [2024-07-15 06:59:19.152260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.673 06:59:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:31.931 06:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.189 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:32.189 fio-3.35 00:30:32.189 Starting 1 thread 00:30:32.189 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.715 [2024-07-15 06:59:21.938082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563e60 is same with the state(5) to be set 00:30:34.715 [2024-07-15 06:59:21.938175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563e60 is same with the state(5) to be set 00:30:34.715 [2024-07-15 06:59:21.938192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563e60 is same with the state(5) to be set 00:30:34.715 [2024-07-15 06:59:21.938204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563e60 is same with the state(5) to be set 00:30:34.715 00:30:34.715 test: (groupid=0, jobs=1): err= 0: pid=751765: Mon Jul 15 06:59:21 2024 00:30:34.715 read: IOPS=9235, BW=36.1MiB/s 
(37.8MB/s)(72.4MiB/2006msec) 00:30:34.715 slat (nsec): min=1988, max=154940, avg=2767.83, stdev=2088.58 00:30:34.715 clat (usec): min=2406, max=13013, avg=7640.17, stdev=601.22 00:30:34.715 lat (usec): min=2427, max=13015, avg=7642.94, stdev=601.12 00:30:34.715 clat percentiles (usec): 00:30:34.715 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:30:34.715 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:30:34.715 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:34.715 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[12256], 00:30:34.715 | 99.99th=[13042] 00:30:34.715 bw ( KiB/s): min=35584, max=37616, per=99.93%, avg=36916.00, stdev=906.26, samples=4 00:30:34.715 iops : min= 8896, max= 9404, avg=9229.00, stdev=226.57, samples=4 00:30:34.715 write: IOPS=9239, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2006msec); 0 zone resets 00:30:34.715 slat (usec): min=2, max=179, avg= 2.86, stdev= 1.94 00:30:34.715 clat (usec): min=1370, max=11566, avg=6175.03, stdev=518.71 00:30:34.715 lat (usec): min=1377, max=11568, avg=6177.89, stdev=518.66 00:30:34.715 clat percentiles (usec): 00:30:34.715 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:30:34.715 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:30:34.715 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:30:34.715 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 9503], 99.95th=[10552], 00:30:34.715 | 99.99th=[11600] 00:30:34.715 bw ( KiB/s): min=36368, max=37312, per=99.99%, avg=36954.00, stdev=422.99, samples=4 00:30:34.715 iops : min= 9092, max= 9328, avg=9238.50, stdev=105.75, samples=4 00:30:34.715 lat (msec) : 2=0.02%, 4=0.12%, 10=99.74%, 20=0.13% 00:30:34.715 cpu : usr=60.50%, sys=34.71%, ctx=45, majf=0, minf=6 00:30:34.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:34.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:34.715 issued rwts: total=18527,18535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:34.715 00:30:34.715 Run status group 0 (all jobs): 00:30:34.715 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:30:34.715 WRITE: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:34.715 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:34.716 06:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:34.716 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:34.716 fio-3.35 00:30:34.716 Starting 1 thread 00:30:34.716 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.240 00:30:37.240 test: (groupid=0, jobs=1): err= 0: pid=752208: Mon Jul 15 06:59:24 2024 00:30:37.240 read: IOPS=8020, BW=125MiB/s (131MB/s)(252MiB/2007msec) 00:30:37.240 slat (nsec): min=2915, max=95673, avg=3603.52, stdev=1668.08 00:30:37.240 clat (usec): min=2156, max=17971, avg=9292.26, stdev=2197.30 00:30:37.240 lat (usec): min=2160, max=17975, avg=9295.86, stdev=2197.32 00:30:37.240 clat percentiles (usec): 00:30:37.240 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7439], 00:30:37.240 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:30:37.240 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12256], 95.00th=[13173], 00:30:37.240 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:30:37.240 | 99.99th=[17695] 00:30:37.240 bw ( KiB/s): min=60256, max=78912, per=52.66%, avg=67576.00, stdev=7964.87, samples=4 00:30:37.240 iops : min= 3766, max= 4932, avg=4223.50, stdev=497.80, samples=4 00:30:37.240 write: IOPS=4800, BW=75.0MiB/s (78.7MB/s)(138MiB/1833msec); 0 zone resets 00:30:37.240 slat (usec): min=30, max=156, avg=33.63, stdev= 5.00 00:30:37.240 clat (usec): min=4352, max=22280, avg=11503.62, stdev=2114.43 00:30:37.240 lat (usec): min=4385, max=22312, avg=11537.24, stdev=2114.27 
00:30:37.240 clat percentiles (usec): 00:30:37.240 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:30:37.240 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:30:37.240 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14484], 95.00th=[15533], 00:30:37.240 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19006], 99.95th=[19006], 00:30:37.240 | 99.99th=[22152] 00:30:37.240 bw ( KiB/s): min=62432, max=81152, per=91.17%, avg=70032.00, stdev=7912.76, samples=4 00:30:37.240 iops : min= 3902, max= 5072, avg=4377.00, stdev=494.55, samples=4 00:30:37.240 lat (msec) : 4=0.15%, 10=51.88%, 20=47.96%, 50=0.01% 00:30:37.240 cpu : usr=74.08%, sys=22.58%, ctx=36, majf=0, minf=2 00:30:37.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:30:37.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:37.240 issued rwts: total=16097,8800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:37.240 00:30:37.240 Run status group 0 (all jobs): 00:30:37.240 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=252MiB (264MB), run=2007-2007msec 00:30:37.240 WRITE: bw=75.0MiB/s (78.7MB/s), 75.0MiB/s-75.0MiB/s (78.7MB/s-78.7MB/s), io=138MiB (144MB), run=1833-1833msec 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:37.240 06:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:40.612 Nvme0n1 00:30:40.612 06:59:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=8d0b2ab6-821b-47b6-8cf6-323eabf825f0 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 8d0b2ab6-821b-47b6-8cf6-323eabf825f0 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=8d0b2ab6-821b-47b6-8cf6-323eabf825f0 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local 
lvs_info 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:43.897 06:59:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:43.897 { 00:30:43.897 "uuid": "8d0b2ab6-821b-47b6-8cf6-323eabf825f0", 00:30:43.897 "name": "lvs_0", 00:30:43.897 "base_bdev": "Nvme0n1", 00:30:43.897 "total_data_clusters": 930, 00:30:43.897 "free_clusters": 930, 00:30:43.897 "block_size": 512, 00:30:43.897 "cluster_size": 1073741824 00:30:43.897 } 00:30:43.897 ]' 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="8d0b2ab6-821b-47b6-8cf6-323eabf825f0") .free_clusters' 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="8d0b2ab6-821b-47b6-8cf6-323eabf825f0") .cluster_size' 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:43.897 952320 00:30:43.897 06:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:44.156 d8da63a6-36f7-4c0c-aed6-1264b06cb952 00:30:44.156 06:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:44.156 06:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:44.414 06:59:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:44.672 06:59:32 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:44.672 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:44.673 06:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.930 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:44.930 fio-3.35 00:30:44.930 Starting 1 thread 00:30:44.930 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.492 00:30:47.492 test: (groupid=0, jobs=1): err= 0: pid=753497: Mon Jul 15 06:59:34 2024 00:30:47.492 read: IOPS=6075, BW=23.7MiB/s (24.9MB/s)(47.7MiB/2008msec) 00:30:47.492 slat (usec): min=2, max=134, avg= 2.66, stdev= 2.11 00:30:47.493 clat (usec): min=813, max=171224, avg=11601.05, stdev=11561.79 00:30:47.493 lat (usec): min=816, max=171261, avg=11603.71, stdev=11562.02 00:30:47.493 clat percentiles (msec): 00:30:47.493 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:47.493 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:47.493 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:47.493 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:47.493 | 99.99th=[ 171] 00:30:47.493 bw ( KiB/s): min=17024, max=26816, per=99.82%, avg=24256.00, stdev=4822.85, samples=4 00:30:47.493 iops : min= 4256, max= 6704, avg=6064.00, stdev=1205.71, samples=4 00:30:47.493 write: IOPS=6054, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2008msec); 0 zone resets 00:30:47.493 slat (usec): min=2, max=111, avg= 2.65, stdev= 1.63 00:30:47.493 clat (usec): min=370, max=169332, avg=9344.63, stdev=10872.65 00:30:47.493 lat (usec): min=372, max=169337, avg=9347.28, stdev=10872.89 00:30:47.493 clat percentiles (msec): 00:30:47.493 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:47.493 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:47.493 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 
10], 95.00th=[ 10], 00:30:47.493 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:47.493 | 99.99th=[ 169] 00:30:47.493 bw ( KiB/s): min=18024, max=26448, per=99.94%, avg=24204.00, stdev=4122.34, samples=4 00:30:47.493 iops : min= 4506, max= 6612, avg=6051.00, stdev=1030.59, samples=4 00:30:47.493 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:47.493 lat (msec) : 2=0.03%, 4=0.14%, 10=57.61%, 20=41.68%, 250=0.53% 00:30:47.493 cpu : usr=58.35%, sys=38.32%, ctx=97, majf=0, minf=24 00:30:47.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:47.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:47.493 issued rwts: total=12199,12158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:47.493 00:30:47.493 Run status group 0 (all jobs): 00:30:47.493 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB (50.0MB), run=2008-2008msec 00:30:47.493 WRITE: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2008-2008msec 00:30:47.493 06:59:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:47.753 06:59:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:48.690 06:59:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=bb988251-d6c4-4046-81ff-3a894610599c 00:30:48.690 06:59:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb bb988251-d6c4-4046-81ff-3a894610599c 00:30:48.690 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=bb988251-d6c4-4046-81ff-3a894610599c 00:30:48.691 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:48.691 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:48.691 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:48.691 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:48.949 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:48.949 { 00:30:48.949 "uuid": "8d0b2ab6-821b-47b6-8cf6-323eabf825f0", 00:30:48.949 "name": "lvs_0", 00:30:48.949 "base_bdev": "Nvme0n1", 00:30:48.949 "total_data_clusters": 930, 00:30:48.949 "free_clusters": 0, 00:30:48.949 "block_size": 512, 00:30:48.949 "cluster_size": 1073741824 00:30:48.949 }, 00:30:48.949 { 00:30:48.949 "uuid": "bb988251-d6c4-4046-81ff-3a894610599c", 00:30:48.949 "name": "lvs_n_0", 00:30:48.949 "base_bdev": "d8da63a6-36f7-4c0c-aed6-1264b06cb952", 00:30:48.949 "total_data_clusters": 237847, 00:30:48.949 "free_clusters": 237847, 00:30:48.949 "block_size": 512, 00:30:48.949 "cluster_size": 4194304 00:30:48.949 } 00:30:48.949 ]' 00:30:48.949 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="bb988251-d6c4-4046-81ff-3a894610599c") .free_clusters' 00:30:48.949 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:48.949 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | 
select(.uuid=="bb988251-d6c4-4046-81ff-3a894610599c") .cluster_size' 00:30:49.207 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:49.207 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:49.207 06:59:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:49.207 951388 00:30:49.207 06:59:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:49.774 31c5edee-eb03-4812-aad5-d3d03944dfee 00:30:49.774 06:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:50.032 06:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:50.290 06:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:50.548 06:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:50.548 
06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:50.548 06:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:50.806 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:50.806 fio-3.35 00:30:50.806 Starting 1 thread 00:30:50.806 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.332 00:30:53.332 test: (groupid=0, jobs=1): err= 0: pid=754226: Mon Jul 15 06:59:40 2024 00:30:53.332 read: IOPS=5766, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2008msec) 00:30:53.332 slat (nsec): min=1912, max=116848, avg=2454.59, stdev=1777.50 00:30:53.332 clat (usec): min=4365, max=20011, avg=12250.50, stdev=1058.54 00:30:53.332 lat (usec): min=4370, max=20013, avg=12252.96, stdev=1058.47 00:30:53.332 clat percentiles (usec): 00:30:53.332 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:30:53.332 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:30:53.332 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13960], 00:30:53.333 | 99.00th=[14615], 99.50th=[15139], 99.90th=[19006], 99.95th=[19792], 00:30:53.333 | 99.99th=[20055] 00:30:53.333 bw ( KiB/s): min=21744, max=23640, per=99.75%, avg=23008.00, stdev=855.89, samples=4 00:30:53.333 iops : min= 5436, max= 5910, avg=5752.00, stdev=213.97, samples=4 00:30:53.333 write: IOPS=5753, BW=22.5MiB/s (23.6MB/s)(45.1MiB/2008msec); 0 zone resets 00:30:53.333 slat (usec): min=2, max=104, avg= 2.56, stdev= 1.47 00:30:53.333 clat (usec): min=2118, max=18680, avg=9795.05, stdev=922.41 00:30:53.333 lat (usec): min=2124, max=18683, avg=9797.61, stdev=922.36 00:30:53.333 clat percentiles (usec): 00:30:53.333 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:53.333 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:53.333 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:30:53.333 | 99.00th=[11863], 99.50th=[12125], 99.90th=[16909], 99.95th=[17695], 00:30:53.333 | 99.99th=[17957] 00:30:53.333 bw ( KiB/s): min=22808, max=23232, per=99.95%, avg=23002.00, stdev=183.99, samples=4 00:30:53.333 iops : min= 5702, max= 5808, avg=5750.50, stdev=46.00, samples=4 00:30:53.333 lat (msec) : 4=0.05%, 10=30.96%, 20=68.99%, 50=0.01% 00:30:53.333 cpu : usr=58.20%, sys=38.52%, ctx=92, majf=0, minf=24 00:30:53.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:53.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.333 issued rwts: total=11579,11553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.333 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.333 00:30:53.333 Run status group 0 (all jobs): 00:30:53.333 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2008-2008msec 00:30:53.333 WRITE: bw=22.5MiB/s (23.6MB/s), 
22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.1MiB (47.3MB), run=2008-2008msec 00:30:53.333 06:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:53.333 06:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:53.333 06:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:57.519 06:59:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:57.519 06:59:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:00.806 06:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:00.806 06:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:02.711 rmmod nvme_tcp 00:31:02.711 rmmod nvme_fabrics 00:31:02.711 rmmod nvme_keyring 00:31:02.711 06:59:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 751406 ']' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 751406 ']' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 751406' 00:31:02.711 killing process with pid 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@970 -- # wait 751406 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.711 06:59:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.251 06:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.251 00:31:05.251 real 0m36.896s 00:31:05.251 user 2m22.056s 00:31:05.251 sys 0m6.681s 00:31:05.251 06:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:05.251 06:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.251 ************************************ 00:31:05.251 END TEST nvmf_fio_host 00:31:05.251 ************************************ 00:31:05.252 06:59:52 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:05.252 06:59:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:05.252 06:59:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:05.252 06:59:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.252 ************************************ 00:31:05.252 START TEST nvmf_failover 00:31:05.252 ************************************ 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:05.252 * Looking for test storage... 
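Condensed, the nvmf_fio_host flow that just finished is a short RPC sequence followed by a fio run through SPDK's external NVMe ioengine. A minimal sketch only, with $SPDK_ROOT used as an editorial shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and every name, size and option taken from the log above:

# carve an lvol bdev out of the nested lvstore and export it over NVMe/TCP
$SPDK_ROOT/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388
$SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
$SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
$SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
# LD_PRELOAD injects the SPDK fio plugin (ioengine=spdk in the job file);
# the --filename string is an NVMe-oF transport ID, not a file path
LD_PRELOAD=$SPDK_ROOT/build/fio/spdk_nvme /usr/src/fio/fio \
  $SPDK_ROOT/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096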
00:31:05.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.252 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.253 06:59:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:07.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:07.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:07.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:07.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.152 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:07.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:31:07.153 00:31:07.153 --- 10.0.0.2 ping statistics --- 00:31:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.153 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:07.153 00:31:07.153 --- 10.0.0.1 ping statistics --- 00:31:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.153 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=757469 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 757469 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 757469 ']' 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:07.153 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.153 [2024-07-15 06:59:54.609680] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
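Stripped of the xtrace noise, the nvmftestinit plumbing above amounts to putting the target's port in its own network namespace and leaving the initiator's port in the default one. A condensed recap, using the cvl_0_* names detected on this host and $SPDK_ROOT as before:

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target sanity check
# the target itself is then started inside the namespace:
ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE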
00:31:07.153 [2024-07-15 06:59:54.609764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.153 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.153 [2024-07-15 06:59:54.688967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:07.411 [2024-07-15 06:59:54.781231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.411 [2024-07-15 06:59:54.781286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.411 [2024-07-15 06:59:54.781301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.411 [2024-07-15 06:59:54.781314] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.411 [2024-07-15 06:59:54.781326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.411 [2024-07-15 06:59:54.781410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.411 [2024-07-15 06:59:54.781533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.411 [2024-07-15 06:59:54.781536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.411 06:59:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:07.669 [2024-07-15 06:59:55.140027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.669 06:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:07.927 Malloc0 00:31:07.927 06:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.185 06:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.444 06:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.702 [2024-07-15 06:59:56.158408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.702 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.960 [2024-07-15 06:59:56.411169] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:08.960 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:09.218 [2024-07-15 06:59:56.651981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=757756 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 757756 /var/tmp/bdevperf.sock 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 757756 ']' 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:09.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:09.218 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:09.476 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:09.476 06:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:09.476 06:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:09.735 NVMe0n1 00:31:09.735 06:59:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:10.306 00:31:10.306 06:59:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=757888 00:31:10.306 06:59:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:10.306 06:59:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:11.244 06:59:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.501 [2024-07-15 06:59:58.887836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6d50 is same with the state(5) to be set 00:31:11.501 [2024-07-15 06:59:58.887921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6d50 is same with the 
state(5) to be set 00:31:11.501 06:59:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:14.812 07:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.812 00:31:14.812 07:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.070 [2024-07-15 07:00:02.654249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bd0 is same with the state(5) to be set
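The *ERROR* lines here are the target tearing down its qpairs as a listener disappears, not a test failure. The pattern being driven, condensed from failover.sh's xtrace above (socket path and names exactly as in this run):

# register two paths to the same subsystem under one controller name,
# so the bdev_nvme layer has a second path to fail over to
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# with I/O in flight, drop the active listener to force a path switch...
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
# ...then, as logged above, register 4422 and remove 4421 to repeat the drill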
00:31:15.070 07:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:18.402 07:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.402 [2024-07-15 07:00:05.916671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.402 07:00:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:19.334 07:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:19.591 [2024-07-15 07:00:07.172295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e8750 is same with the state(5) to be set
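How the run is orchestrated: bdevperf is started in server mode (-z) against its own RPC socket, the workload is kicked off out-of-band by the Python helper, and the shell stays free to shuffle listeners while I/O is in flight. Roughly, per the commands logged above ($SPDK_ROOT as before):

$SPDK_ROOT/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4096 -w verify -t 15 -f &           # waits for bdevs instead of exiting
# ...attach_controller and listener changes happen against the live process...
$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
wait $run_test_pid    # the lone 0 logged below appears to be this test's result code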
00:31:19.591 07:00:07 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 757888 00:31:26.159 0 00:31:26.159 07:00:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 757756 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 757756 ']' 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 757756 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 757756 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 757756' killing process with pid 757756 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 757756 07:00:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 757756 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt [2024-07-15 06:59:56.714839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-15 06:59:56.714941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757756 ] EAL: No free 2048 kB hugepages reported on node 1 [2024-07-15 06:59:56.776231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-07-15 06:59:56.864276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 Running I/O for 15 seconds...
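The wall of NOTICE pairs that follows (part of the try.txt dump) is the per-command record of in-flight READs and WRITEs completing with ABORTED - SQ DELETION as the removed listener's submission queues are deleted; the verify workload finishing cleanly anyway suggests the bdev layer resubmitted them on the surviving path. A quick way to tally them from the saved log, if one wanted to:

# count aborted completions captured in the bdevperf output above
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt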
00:31:26.159 [2024-07-15 06:59:58.889060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.159 [2024-07-15 06:59:58.889104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pair repeats for every queued READ from lba 67888 through 68064 (len:8, SGL TRANSPORT DATA BLOCK), each aborted with SQ DELETION (00/08) ...]
00:31:26.160 [2024-07-15 06:59:58.889808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.160 [2024-07-15 06:59:58.889821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for every queued WRITE from lba 68096 through 68720 (len:8, SGL DATA BLOCK OFFSET), each aborted with SQ DELETION (00/08) ...]
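For readers following the output: the burst above is the qpair teardown path walking the list of commands that were still queued in software when the TCP connection dropped, printing each one and completing it locally with a synthetic ABORTED - SQ DELETION status. A minimal sketch of that drain-and-abort pattern in plain C; nvme_req, qpair, abort_cb and abort_queued_reqs here are hypothetical stand-ins for illustration, not SPDK's real structures or functions:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical, simplified stand-ins for the driver's request/qpair types. */
    struct nvme_req {
        uint16_t cid;
        uint64_t lba;
        const char *op;                 /* "READ" or "WRITE" */
        void (*cb)(struct nvme_req *r, uint8_t sct, uint8_t sc);
        struct nvme_req *next;
    };

    struct qpair {
        struct nvme_req *queued;        /* singly linked list of unsent requests */
    };

    /* Generic status type (sct 0x0), "Command Aborted due to SQ Deletion" (sc 0x08). */
    #define SCT_GENERIC            0x0
    #define SC_ABORTED_SQ_DELETION 0x08

    static void abort_cb(struct nvme_req *r, uint8_t sct, uint8_t sc)
    {
        printf("%s cid:%u lba:%llu -> ABORTED (%02x/%02x)\n",
               r->op, (unsigned)r->cid, (unsigned long long)r->lba, sct, sc);
    }

    /* Drain the queue: complete each still-queued request in software with an
     * abort status, mirroring the command/completion pairs in the log above. */
    static void abort_queued_reqs(struct qpair *q)
    {
        while (q->queued != NULL) {
            struct nvme_req *r = q->queued;
            q->queued = r->next;
            r->cb(r, SCT_GENERIC, SC_ABORTED_SQ_DELETION);
        }
    }

    int main(void)
    {
        struct nvme_req b = { 2, 67888, "READ", abort_cb, NULL };
        struct nvme_req a = { 1, 67880, "READ", abort_cb, &b };
        struct qpair q = { &a };
        abort_queued_reqs(&q);
        return 0;
    }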
00:31:26.162 [2024-07-15 06:59:58.892146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.162 [2024-07-15 06:59:58.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68728 len:8 PRP1 0x0 PRP2 0x0
00:31:26.163 [2024-07-15 06:59:58.892177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.163 [2024-07-15 06:59:58.892194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same manual-completion sequence repeats for queued WRITEs lba 68736 through 68896 and queued READs lba 68072 and 68080 (len:8, PRP1 0x0 PRP2 0x0), all aborted with SQ DELETION (00/08) ...]
00:31:26.164 [2024-07-15 06:59:58.893340] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x247bb50 was disconnected and freed. reset controller.
00:31:26.164 [2024-07-15 06:59:58.893357] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:26.164 [2024-07-15 06:59:58.893390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.164 [2024-07-15 06:59:58.893408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the admin ASYNC EVENT REQUESTs cid:1 through cid:3 ...]
00:31:26.164 [2024-07-15 06:59:58.893503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:26.164 [2024-07-15 06:59:58.893560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245ceb0 (9): Bad file descriptor
00:31:26.164 [2024-07-15 06:59:58.896779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:26.164 [2024-07-15 06:59:59.013798] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
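The bdev_nvme and nvme_ctrlr lines at the end of the burst are the failover handshake visible in this test: the dead qpair is freed, the target transport ID rolls from 10.0.0.2:4420 to the alternate listener 10.0.0.2:4421, the controller is marked failed and disconnected, and the reset path reconnects on the new address. A sketch of that roll-to-next-path idea, under the assumption of a simple array of candidate paths; trid, ctrlr, connect_path and reset_ctrlr are made-up names for illustration only:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct trid { const char *addr; const char *svcid; };

    /* Hypothetical failover state: a ring of candidate paths for one controller. */
    struct ctrlr {
        struct trid paths[2];
        size_t n_paths;
        size_t active;          /* index of the path currently in use */
    };

    /* Stand-in for the transport connect; flip the return value to simulate failure. */
    static bool connect_path(const struct trid *t)
    {
        printf("resetting controller: connecting to %s:%s\n", t->addr, t->svcid);
        return true;
    }

    /* On qpair loss, advance to the next path and reconnect -- the same idea as
     * the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice above. */
    static bool reset_ctrlr(struct ctrlr *c)
    {
        size_t next = (c->active + 1) % c->n_paths;
        printf("Start failover from %s:%s to %s:%s\n",
               c->paths[c->active].addr, c->paths[c->active].svcid,
               c->paths[next].addr, c->paths[next].svcid);
        c->active = next;
        return connect_path(&c->paths[c->active]);
    }

    int main(void)
    {
        struct ctrlr c = { { { "10.0.0.2", "4420" }, { "10.0.0.2", "4421" } }, 2, 0 };
        if (reset_ctrlr(&c))
            printf("Resetting controller successful.\n");
        return 0;
    }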
00:31:26.164 [2024-07-15 07:00:02.655058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.164 [2024-07-15 07:00:02.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.164 [2024-07-15 07:00:02.655584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.165 [2024-07-15 07:00:02.655597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.165 [2024-07-15 07:00:02.655612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.165 [2024-07-15 07:00:02.655626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.165 [2024-07-15 07:00:02.655640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.165 [2024-07-15 07:00:02.655654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.165 [2024-07-15 07:00:02.655668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.165 [2024-07-15 07:00:02.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.165 [2024-07-15 07:00:02.655696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.165 [2024-07-15 07:00:02.655851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.165 [2024-07-15 07:00:02.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.165 [2024-07-15 07:00:02.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.165 [2024-07-15 07:00:02.655943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.165 [2024-07-15 07:00:02.655971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.655985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.655998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.165 [2024-07-15 07:00:02.656350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.165 [2024-07-15 07:00:02.656363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.166 [2024-07-15 07:00:02.656390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.166 [2024-07-15 07:00:02.656417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.166 [2024-07-15 07:00:02.656445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.166 [2024-07-15 07:00:02.656473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.166 [2024-07-15 07:00:02.656501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.656979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.656994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.166 [2024-07-15 07:00:02.657333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.166 [2024-07-15 07:00:02.657347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.167 [2024-07-15 07:00:02.657916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.657968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92856 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.657981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.657999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92864 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92872 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92888 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92904 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.167 [2024-07-15 07:00:02.658298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.167 [2024-07-15 07:00:02.658308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0
00:31:26.167 [2024-07-15 07:00:02.658326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.167 [2024-07-15 07:00:02.658340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.658953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.658965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.658978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.658989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93056 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93064 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.168 [2024-07-15 07:00:02.659274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.168 [2024-07-15 07:00:02.659285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.168 [2024-07-15 07:00:02.659295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93072 len:8 PRP1 0x0 PRP2 0x0
00:31:26.168 [2024-07-15 07:00:02.659308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.169 [2024-07-15 07:00:02.659331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.169 [2024-07-15 07:00:02.659342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93080 len:8 PRP1 0x0 PRP2 0x0
00:31:26.169 [2024-07-15 07:00:02.659355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.169 [2024-07-15 07:00:02.659379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.169 [2024-07-15 07:00:02.659390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93088 len:8 PRP1 0x0 PRP2 0x0
00:31:26.169 [2024-07-15 07:00:02.659402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.169 [2024-07-15 07:00:02.659425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.169 [2024-07-15 07:00:02.659436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93096 len:8 PRP1 0x0 PRP2 0x0
00:31:26.169 [2024-07-15 07:00:02.659448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659505] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26265b0 was disconnected and freed. reset controller.
00:31:26.169 [2024-07-15 07:00:02.659523] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:26.169 [2024-07-15 07:00:02.659556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.169 [2024-07-15 07:00:02.659574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.169 [2024-07-15 07:00:02.659601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.169 [2024-07-15 07:00:02.659627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.169 [2024-07-15 07:00:02.659657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:02.659676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:26.169 [2024-07-15 07:00:02.659714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245ceb0 (9): Bad file descriptor
00:31:26.169 [2024-07-15 07:00:02.662959] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:26.169 [2024-07-15 07:00:02.695433] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:26.169 [2024-07-15 07:00:07.174154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.169 [2024-07-15 07:00:07.174692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.169 [2024-07-15 07:00:07.174719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.169 [2024-07-15 07:00:07.174747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.169 [2024-07-15 07:00:07.174774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.169 [2024-07-15 07:00:07.174788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.174980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.174994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.170 [2024-07-15 07:00:07.175598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.170 [2024-07-15 07:00:07.175612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.175975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.175988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.171 [2024-07-15 07:00:07.176330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.171 [2024-07-15 07:00:07.176351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.171 [2024-07-15 07:00:07.176553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.171 [2024-07-15 07:00:07.176566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 
07:00:07.176665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.176973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.176987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.177001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.177029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.177058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.177086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.172 [2024-07-15 07:00:07.177115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20752 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20760 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20776 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20784 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20792 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20808 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.172 [2024-07-15 07:00:07.177551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.172 [2024-07-15 07:00:07.177562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.172 [2024-07-15 07:00:07.177573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20816 len:8 PRP1 0x0 PRP2 0x0 00:31:26.172 [2024-07-15 07:00:07.177585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 
07:00:07.177620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20824 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20840 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20848 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20856 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20872 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.177952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.177963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.177974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20880 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.177987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20888 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20904 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20912 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:20920 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20936 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20944 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20952 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:8 PRP1 0x0 PRP2 0x0 00:31:26.173 [2024-07-15 07:00:07.178473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.173 [2024-07-15 07:00:07.178507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20968 len:8 PRP1 0x0 PRP2 0x0 
00:31:26.173 [2024-07-15 07:00:07.178525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.173 [2024-07-15 07:00:07.178544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.173 [2024-07-15 07:00:07.178556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.174 [2024-07-15 07:00:07.178567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20976 len:8 PRP1 0x0 PRP2 0x0 00:31:26.174 [2024-07-15 07:00:07.178579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.174 [2024-07-15 07:00:07.178603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.174 [2024-07-15 07:00:07.178613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20984 len:8 PRP1 0x0 PRP2 0x0 00:31:26.174 [2024-07-15 07:00:07.178626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.174 [2024-07-15 07:00:07.178649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.174 [2024-07-15 07:00:07.178660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:8 PRP1 0x0 PRP2 0x0 00:31:26.174 [2024-07-15 07:00:07.178673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.174 [2024-07-15 07:00:07.178696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.174 [2024-07-15 07:00:07.178707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21000 len:8 PRP1 0x0 PRP2 0x0 00:31:26.174 [2024-07-15 07:00:07.178720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178776] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24806d0 was disconnected and freed. reset controller. 
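The (00/08) code attached to every completion above is NVMe status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion: when the TCP connection drops, every WRITE still queued on that submission queue is completed with this error before the controller is reset. Two greps that pull the story out of a captured log like this one (the filename is a stand-in for wherever the bdevperf output landed):

    # How many queued WRITEs were completed as aborted during the failovers,
    # and which path transitions were taken (see the notice just below).
    grep -c 'ABORTED - SQ DELETION' bdevperf.log
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' bdevperf.log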
00:31:26.174 [2024-07-15 07:00:07.178794] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:26.174 [2024-07-15 07:00:07.178827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.174 [2024-07-15 07:00:07.178849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.174 [2024-07-15 07:00:07.178894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.174 [2024-07-15 07:00:07.178922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.174 [2024-07-15 07:00:07.178948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.174 [2024-07-15 07:00:07.178961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:26.174 [2024-07-15 07:00:07.179012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245ceb0 (9): Bad file descriptor 00:31:26.174 [2024-07-15 07:00:07.182298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:26.174 [2024-07-15 07:00:07.340632] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
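'Resetting controller successful' is the exact token host/failover.sh keys on: the trace just below greps for it and requires a count of three, one per exercised path change. A minimal sketch of that pass criterion, assuming the bdevperf output was captured to the try.txt file this run uses:

    # host/failover.sh@65-67 reduced to its essentials: exactly three
    # successful controller resets, or the test fails.
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        exit 1
    fi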
00:31:26.174
00:31:26.174 Latency(us)
00:31:26.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:26.174 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:26.174 Verification LBA range: start 0x0 length 0x4000
00:31:26.174 NVMe0n1 : 15.01 8409.91 32.85 793.92 0.00 13880.10 758.52 16796.63
00:31:26.174 ===================================================================================================================
00:31:26.174 Total : 8409.91 32.85 793.92 0.00 13880.10 758.52 16796.63
00:31:26.174 Received shutdown signal, test time was about 15.000000 seconds
00:31:26.174
00:31:26.174 Latency(us)
00:31:26.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:26.174 ===================================================================================================================
00:31:26.174 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=760347
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 760347 /var/tmp/bdevperf.sock
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 760347 ']'
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
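The trace that follows rebuilds the multipath topology for a second, shorter pass: two extra listeners on the target, three attach calls against bdevperf's RPC socket so that 4421 and 4422 become alternate trids behind the one NVMe0 controller, then a detach of the active path to provoke a failover. Condensed into a standalone sketch ($SPDK stands in for the workspace checkout used throughout this log; the RPC invocations are the ones visible in the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1
    # Target side: open the two extra failover ports (4420 is already listening).
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    # Initiator side: the first attach creates bdev NVMe0n1; the next two only
    # register additional trids for the same controller.
    for port in 4420 4421 4422; do
        $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n $NQN
    done
    # Dropping the active path forces bdev_nvme to reset onto the next trid;
    # the controller should survive the switch.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    sleep 3
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0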
00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:26.174 [2024-07-15 07:00:13.559390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.174 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:26.431 [2024-07-15 07:00:13.804069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:26.431 07:00:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:26.689 NVMe0n1 00:31:26.689 07:00:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:26.947 00:31:26.947 07:00:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:27.513 00:31:27.513 07:00:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:27.513 07:00:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:27.772 07:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.032 07:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:31.315 07:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:31.315 07:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:31.315 07:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=761010 00:31:31.315 07:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:31.315 07:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 761010 00:31:32.248 0 00:31:32.248 07:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:32.248 [2024-07-15 07:00:13.092119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:31:32.248 [2024-07-15 07:00:13.092214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760347 ] 00:31:32.248 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.248 [2024-07-15 07:00:13.152469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.248 [2024-07-15 07:00:13.235466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.248 [2024-07-15 07:00:15.389136] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:32.248 [2024-07-15 07:00:15.389217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.248 [2024-07-15 07:00:15.389254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.248 [2024-07-15 07:00:15.389276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.248 [2024-07-15 07:00:15.389290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.248 [2024-07-15 07:00:15.389304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.248 [2024-07-15 07:00:15.389317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.248 [2024-07-15 07:00:15.389331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.248 [2024-07-15 07:00:15.389344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.248 [2024-07-15 07:00:15.389357] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:32.248 [2024-07-15 07:00:15.389401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:32.248 [2024-07-15 07:00:15.389432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfceb0 (9): Bad file descriptor 00:31:32.248 [2024-07-15 07:00:15.441455] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:32.248 Running I/O for 1 seconds... 
00:31:32.248
00:31:32.248 Latency(us)
00:31:32.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:32.248 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:32.248 Verification LBA range: start 0x0 length 0x4000
00:31:32.248 NVMe0n1 : 1.01 7675.88 29.98 0.00 0.00 16577.47 2330.17 15534.46
00:31:32.248 ===================================================================================================================
00:31:32.248 Total : 7675.88 29.98 0.00 0.00 16577.47 2330.17 15534.46
00:31:32.248 07:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:32.817 07:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:33.076 07:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:33.076 07:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:33.334 07:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:33.591 07:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:36.883 07:00:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:00:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:36.883 07:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 760347
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 760347 ']'
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 760347
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 760347
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 760347'
killing process with pid 760347
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 760347
07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 760347
07:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
07:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:37.141 07:00:24
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:37.141 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:37.141 rmmod nvme_tcp 00:31:37.398 rmmod nvme_fabrics 00:31:37.398 rmmod nvme_keyring 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 757469 ']' 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 757469 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 757469 ']' 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 757469 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 757469 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 757469' 00:31:37.398 killing process with pid 757469 00:31:37.398 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 757469 00:31:37.399 07:00:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 757469 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.656 07:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.564 07:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:39.564 00:31:39.564 real 0m34.776s 00:31:39.564 user 2m0.203s 00:31:39.564 sys 0m6.804s 00:31:39.564 07:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:39.564 07:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:39.564 
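Teardown, traced above, is the mirror image of setup: stop bdevperf, delete the subsystem over the target RPC, drop the per-run log, unload the kernel NVMe-fabrics modules, and stop the target. The killprocess and nvmftestfini helpers seen in the trace wrap these steps with retries and namespace cleanup; a bare-bones sketch with this run's pids ($SPDK as in the earlier sketch):

    kill 760347 && wait 760347                 # bdevperf (reactor_0 in the ps check above)
    sync
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f $SPDK/test/nvmf/host/try.txt
    modprobe -v -r nvme-tcp                    # the rmmod lines above show nvme_tcp,
    modprobe -v -r nvme-fabrics                # nvme_fabrics and nvme_keyring going away
    kill 757469 && wait 757469                 # nvmf target (reactor_1)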
************************************ 00:31:39.564 END TEST nvmf_failover 00:31:39.564 ************************************ 00:31:39.564 07:00:27 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:39.564 07:00:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:39.564 07:00:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:39.564 07:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:39.564 ************************************ 00:31:39.564 START TEST nvmf_host_discovery 00:31:39.564 ************************************ 00:31:39.564 07:00:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:39.823 * Looking for test storage... 00:31:39.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.823 07:00:27 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2-6 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; export PATH [export.sh@2-4 each prepend the golangci/protoc/go toolchain trio again and @6 echoes the result; duplicated toolchain segments condensed] 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- #
DISCOVERY_PORT=8009 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:39.823 07:00:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:41.729 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:41.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.729 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:41.730 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:41.730 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:41.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:31:41.730 00:31:41.730 --- 10.0.0.2 ping statistics --- 00:31:41.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.730 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:41.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:41.730 00:31:41.730 --- 10.0.0.1 ping statistics --- 00:31:41.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.730 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=763611 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 763611 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 763611 ']' 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:41.730 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.730 [2024-07-15 07:00:29.254274] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:41.730 [2024-07-15 07:00:29.254350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.730 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.730 [2024-07-15 07:00:29.319664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.989 [2024-07-15 07:00:29.404253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.989 [2024-07-15 07:00:29.404307] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.989 [2024-07-15 07:00:29.404320] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.989 [2024-07-15 07:00:29.404330] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.989 [2024-07-15 07:00:29.404339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
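Up to this point nvmftestinit has built the standard SPDK single-host TCP test bed: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, both get an address in 10.0.0.0/24, and the target application is then launched inside that namespace. A condensed sketch of those steps, using the interface names and addresses from this run (illustrative shell, not the verbatim nvmf/common.sh code):

  # build the namespace pair: target side in cvl_0_0_ns_spdk, initiator in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check
  # start the target inside the namespace (what nvmfappstart -m 0x2 expands to here)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Running both endpoints on one machine this way exercises a real NIC and a real TCP stack while keeping the test self-contained.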
00:31:41.989 [2024-07-15 07:00:29.404387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 [2024-07-15 07:00:29.533904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 [2024-07-15 07:00:29.542114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 null0 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 null1 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=763633 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 763633 /tmp/host.sock 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 763633 ']' 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:41.989 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:41.989 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.249 [2024-07-15 07:00:29.614977] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:42.249 [2024-07-15 07:00:29.615056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763633 ] 00:31:42.249 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.249 [2024-07-15 07:00:29.676537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.249 [2024-07-15 07:00:29.770452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
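Two RPC endpoints are in play from here on: the target started above answers on the default /var/tmp/spdk.sock, while the second nvmf_tgt (pid 763633) acts as the discovery host and answers on /tmp/host.sock. Reduced to plain rpc.py calls (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py), the host side of the test amounts to:

  # host application on core 0x1 with its own RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  # connect the discovery service to the target's discovery listener on port 8009;
  # -b nvme makes attached controllers show up as nvme0, nvme1, ...
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # the checks below repeatedly poll these two lists until the expected names appear
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

Both lists are empty at this point because the target does not yet expose any subsystem; the '' == '' comparisons that follow confirm exactly that.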
00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.508 07:00:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:42.508 07:00:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.508 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 [2024-07-15 07:00:30.175802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:42.770 07:00:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:43.338 [2024-07-15 07:00:30.911777] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:43.338 [2024-07-15 07:00:30.911805] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:43.338 [2024-07-15 07:00:30.911830] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:43.596 [2024-07-15 07:00:30.999136] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:43.596 [2024-07-15 07:00:31.102015] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:43.596 [2024-07-15 07:00:31.102037] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:43.854 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:43.855 07:00:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:43.855 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.113 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 [2024-07-15 07:00:31.607890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:44.114 [2024-07-15 07:00:31.608295] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:44.114 [2024-07-15 07:00:31.608331] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:44.114 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.374 07:00:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:44.374 07:00:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:44.374 [2024-07-15 07:00:31.736024] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:44.374 [2024-07-15 07:00:31.794492] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:44.374 [2024-07-15 07:00:31.794517] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:44.374 [2024-07-15 07:00:31.794528] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.312 [2024-07-15 07:00:32.827948] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:45.312 [2024-07-15 07:00:32.827978] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:45.312 [2024-07-15 07:00:32.833375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.312 [2024-07-15 07:00:32.833409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.312 [2024-07-15 07:00:32.833427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.312 [2024-07-15 07:00:32.833442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.312 [2024-07-15 07:00:32.833457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.312 [2024-07-15 07:00:32.833472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.312 [2024-07-15 07:00:32.833487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.312 [2024-07-15 07:00:32.833504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.312 [2024-07-15 07:00:32.833519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to 
be set 00:31:45.312 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:45.313 [2024-07-15 07:00:32.843380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.313 [2024-07-15 07:00:32.853426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.853631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.853663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.853682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.853707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.853731] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.853757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.853774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.853796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.313 [2024-07-15 07:00:32.863507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.863720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.863750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.863769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.863793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.863817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.863832] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.863847] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:45.313 [2024-07-15 07:00:32.863884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:45.313 [2024-07-15 07:00:32.873585] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.873769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.873801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.873819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.873844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.873868] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.873894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.873925] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.873951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.313 [2024-07-15 07:00:32.883665] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.883890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.883918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.883935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.883957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.883978] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.883993] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.884006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.884026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.313 [2024-07-15 07:00:32.893754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.893924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.893952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.893968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.893990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.894011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.894025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.894039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.894058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.313 [2024-07-15 07:00:32.903840] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.904058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.904085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 [2024-07-15 07:00:32.904101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.904123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.904144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.904158] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.904171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.904190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:45.313 [2024-07-15 07:00:32.913930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.313 [2024-07-15 07:00:32.914096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.313 [2024-07-15 07:00:32.914123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd45450 with addr=10.0.0.2, port=4420 00:31:45.313 
[2024-07-15 07:00:32.914139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd45450 is same with the state(5) to be set 00:31:45.313 [2024-07-15 07:00:32.914161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd45450 (9): Bad file descriptor 00:31:45.313 [2024-07-15 07:00:32.914182] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.313 [2024-07-15 07:00:32.914195] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.313 [2024-07-15 07:00:32.914209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.313 [2024-07-15 07:00:32.914228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.313 [2024-07-15 07:00:32.914620] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:45.313 [2024-07-15 07:00:32.914646] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:45.313 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.572 07:00:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.572 07:00:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.946 [2024-07-15 07:00:34.181668] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:46.946 [2024-07-15 07:00:34.181694] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:46.946 [2024-07-15 07:00:34.181719] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:46.946 [2024-07-15 07:00:34.269067] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:46.946 [2024-07-15 07:00:34.374240] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:46.946 [2024-07-15 07:00:34.374279] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.946 request: 00:31:46.946 { 00:31:46.946 "name": "nvme", 00:31:46.946 "trtype": 
"tcp", 00:31:46.946 "traddr": "10.0.0.2", 00:31:46.946 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:46.946 "adrfam": "ipv4", 00:31:46.946 "trsvcid": "8009", 00:31:46.946 "wait_for_attach": true, 00:31:46.946 "method": "bdev_nvme_start_discovery", 00:31:46.946 "req_id": 1 00:31:46.946 } 00:31:46.946 Got JSON-RPC error response 00:31:46.946 response: 00:31:46.946 { 00:31:46.946 "code": -17, 00:31:46.946 "message": "File exists" 00:31:46.946 } 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.946 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.947 request: 00:31:46.947 { 00:31:46.947 "name": "nvme_second", 00:31:46.947 "trtype": "tcp", 00:31:46.947 "traddr": "10.0.0.2", 00:31:46.947 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:46.947 "adrfam": "ipv4", 00:31:46.947 "trsvcid": "8009", 00:31:46.947 "wait_for_attach": true, 00:31:46.947 "method": "bdev_nvme_start_discovery", 00:31:46.947 "req_id": 1 00:31:46.947 } 00:31:46.947 Got JSON-RPC error response 00:31:46.947 response: 00:31:46.947 { 00:31:46.947 "code": -17, 00:31:46.947 "message": "File exists" 00:31:46.947 } 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:46.947 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:47.205 07:00:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.205 07:00:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.145 [2024-07-15 07:00:35.581775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.145 [2024-07-15 07:00:35.581838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5e8a0 with addr=10.0.0.2, port=8010 00:31:48.145 [2024-07-15 07:00:35.581867] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:48.145 [2024-07-15 07:00:35.581891] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:48.145 [2024-07-15 07:00:35.581914] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:49.082 [2024-07-15 07:00:36.584230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.082 [2024-07-15 07:00:36.584287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd74cd0 with addr=10.0.0.2, port=8010 00:31:49.082 [2024-07-15 07:00:36.584319] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:49.082 [2024-07-15 07:00:36.584336] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:49.082 [2024-07-15 07:00:36.584350] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:50.023 [2024-07-15 07:00:37.586333] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:50.023 request: 00:31:50.023 { 00:31:50.023 "name": "nvme_second", 00:31:50.023 "trtype": "tcp", 00:31:50.023 "traddr": "10.0.0.2", 00:31:50.023 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:50.023 "adrfam": "ipv4", 00:31:50.023 "trsvcid": "8010", 00:31:50.023 "attach_timeout_ms": 3000, 00:31:50.023 "method": "bdev_nvme_start_discovery", 00:31:50.023 "req_id": 1 00:31:50.023 } 00:31:50.023 Got JSON-RPC error response 00:31:50.023 response: 00:31:50.023 { 00:31:50.023 "code": -110, 00:31:50.023 "message": "Connection timed out" 00:31:50.023 } 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:50.023 07:00:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 763633 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:50.023 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:50.281 rmmod nvme_tcp 00:31:50.281 rmmod nvme_fabrics 00:31:50.281 rmmod nvme_keyring 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 763611 ']' 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 763611 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 763611 ']' 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 763611 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 763611 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # 
echo 'killing process with pid 763611' 00:31:50.281 killing process with pid 763611 00:31:50.281 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 763611 00:31:50.282 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 763611 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.541 07:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.443 07:00:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:52.443 00:31:52.443 real 0m12.824s 00:31:52.443 user 0m18.620s 00:31:52.443 sys 0m2.648s 00:31:52.443 07:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:52.443 07:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.443 ************************************ 00:31:52.443 END TEST nvmf_host_discovery 00:31:52.443 ************************************ 00:31:52.443 07:00:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:52.443 07:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:52.443 07:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:52.443 07:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:52.443 ************************************ 00:31:52.443 START TEST nvmf_host_multipath_status 00:31:52.443 ************************************ 00:31:52.443 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:52.700 * Looking for test storage... 
00:31:52.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:52.700 07:00:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:52.700 07:00:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.605 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:54.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:54.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:54.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:54.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.606 07:00:41 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.606 07:00:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:31:54.606 00:31:54.606 --- 10.0.0.2 ping statistics --- 00:31:54.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.606 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:31:54.606 00:31:54.606 --- 10.0.0.1 ping statistics --- 00:31:54.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.606 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=766655 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 766655 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 766655 ']' 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:54.606 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 [2024-07-15 07:00:42.149259] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
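[Editor's note: everything between ip netns add and the two pings builds the single-host loopback topology this test runs on: the first E810 port (cvl_0_0) moves into the private namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420. nvmf_tgt is then launched inside the namespace via $NVMF_TARGET_NS_CMD; the later rpc.py calls need no ip netns exec because the target's UNIX-domain RPC socket lives on the filesystem and is not network-namespaced. A condensed replay of the setup, taken directly from the trace (interface names are this run's; substitute your own ports):

    TGT_IF=cvl_0_0        # target-side port, moved into the namespace
    INI_IF=cvl_0_1        # initiator-side port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP (port 4420) from the initiator interface.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> initiator
]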
00:31:54.606 [2024-07-15 07:00:42.149339] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.606 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.606 [2024-07-15 07:00:42.215975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:54.865 [2024-07-15 07:00:42.307338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.865 [2024-07-15 07:00:42.307399] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.865 [2024-07-15 07:00:42.307414] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.865 [2024-07-15 07:00:42.307427] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.865 [2024-07-15 07:00:42.307439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.865 [2024-07-15 07:00:42.307728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.865 [2024-07-15 07:00:42.307736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=766655 00:31:54.865 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:55.124 [2024-07-15 07:00:42.676067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.124 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:55.382 Malloc0 00:31:55.382 07:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:55.641 07:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:55.898 07:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.156 [2024-07-15 07:00:43.750935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.156 07:00:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:56.414 [2024-07-15 07:00:43.987523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=766938 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 766938 /var/tmp/bdevperf.sock 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 766938 ']' 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:56.414 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:56.978 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:56.978 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:56.978 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:56.978 07:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:57.544 Nvme0n1 00:31:57.544 07:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:58.109 Nvme0n1 00:31:58.109 07:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:58.109 07:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:00.011 07:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:00.011 07:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:00.269 07:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:00.526 07:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:01.462 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:01.462 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:01.462 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.462 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:01.720 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.720 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:01.721 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.721 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:01.979 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.979 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:01.979 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.979 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.237 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.237 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:02.237 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.237 07:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.496 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.496 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:02.496 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.496 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:32:02.754 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.754 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:02.754 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.754 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.011 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.011 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:03.011 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:03.269 07:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:03.528 07:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.906 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.164 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.164 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.164 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.164 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.423 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:32:05.423 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.423 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.423 07:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:05.681 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.681 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:05.681 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.681 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:05.940 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.940 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:05.940 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.940 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:06.198 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.198 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:06.198 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:06.456 07:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:06.714 07:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:07.646 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:07.646 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:07.646 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.646 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:07.904 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.904 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:32:07.904 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.904 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:08.161 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:08.161 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:08.161 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.162 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:08.423 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.423 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:08.423 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.423 07:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:08.718 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.718 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:08.718 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.718 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:08.975 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.975 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:08.975 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.975 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:09.233 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.233 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:09.233 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:09.233 07:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:09.491 07:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.868 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.126 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.126 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.126 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.126 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.384 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.384 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.384 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.384 07:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:11.643 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.643 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:11.643 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.643 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:11.901 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
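[Editor's note: from here to the end of the run, the script cycles ANA states and re-checks the initiator's view after each change. Every port_status expansion above is the same one-liner: ask bdevperf, over its own RPC socket, for its I/O paths with bdev_nvme_get_io_paths, pull one boolean (current / connected / accessible) for the listener on a given trsvcid with jq, and compare it to the expectation. An approximate reconstruction of the two helpers, inferred from the trace (the authoritative definitions live in test/nvmf/host/multipath_status.sh):

    rpc_bperf() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock "$@"
    }

    port_status() {                 # port_status <trsvcid> <attr> <expected>
        local port=$1 attr=$2 expected=$3
        [[ $(rpc_bperf bdev_nvme_get_io_paths | jq -r \
            ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr") \
            == "$expected" ]]
    }

    check_status() {                # expectations for 4420/4421, per attribute
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }
]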
00:32:11.901 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:11.901 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.901 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:12.159 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:12.159 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:12.159 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:12.416 07:00:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:12.673 07:01:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:13.612 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:13.612 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:13.612 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.612 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.869 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:13.869 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:13.869 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.869 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.127 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:14.127 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.127 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.127 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.384 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.384 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
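[Editor's note: the target side of each iteration is a pair of nvmf_subsystem_listener_set_ana_state calls, one per listener, followed by sleep 1 so the host can absorb the ANA change notification before the paths are re-polled. The checks around this point also show the semantics under test: a listener set to inaccessible keeps its TCP connection (connected stays true) but drops current and accessible, so I/O shifts to the surviving path. A sketch of one full cycle, with values matching the inaccessible/optimized step that follows (paths as in this run):

    tgt_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {               # set_ANA_state <state-4420> <state-4421>
        $tgt_rpc nvmf_subsystem_listener_set_ana_state "$NQN" \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $tgt_rpc nvmf_subsystem_listener_set_ana_state "$NQN" \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state inaccessible optimized
    sleep 1     # let the host process the ANA change before re-polling
    check_status false true true true false true   # 4420 loses current+accessible
]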
00:32:14.384 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.384 07:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:14.642 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.642 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:14.642 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.642 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.899 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:14.899 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:14.899 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.899 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.157 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:15.157 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:15.157 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:15.415 07:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:15.674 07:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:16.609 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:16.609 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:16.609 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.609 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.867 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:16.867 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:16.867 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.867 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.125 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.125 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.125 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.125 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.383 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.383 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.383 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.383 07:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.642 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.642 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:17.642 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.642 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.900 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.900 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:17.900 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.900 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.158 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.158 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:18.416 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:18.416 07:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:18.675 07:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:18.933 07:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:19.868 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:19.868 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:19.868 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.868 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:20.126 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.126 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:20.126 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.126 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:20.384 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.384 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:20.384 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.384 07:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:20.643 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.643 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:20.643 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.643 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.901 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.901 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.901 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.901 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:21.160 07:01:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.160 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:21.160 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.160 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:21.418 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.418 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:21.418 07:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:21.676 07:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:21.936 07:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:22.876 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:22.876 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:22.876 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.876 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:23.133 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.133 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:23.133 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.134 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:23.441 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.441 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:23.441 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.441 07:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:23.700 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.700 07:01:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:23.700 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.700 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.958 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:24.214 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.214 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:24.214 07:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:24.471 07:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:24.731 07:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:26.106 07:01:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.106 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:26.364 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:26.364 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:26.364 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.364 07:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:26.622 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:26.622 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:26.622 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.622 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:26.880 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:26.880 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:26.880 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.880 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:27.138 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:27.138 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:27.138 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:27.138 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:27.396 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:27.396 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:32:27.396 07:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:27.654 07:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:27.914 07:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:32:28.851 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:32:28.851 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:28.851 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:28.851 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:29.109 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.109 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:29.109 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.109 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:29.367 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:29.367 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:29.367 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.367 07:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:29.626 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.626 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:29.626 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.626 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:29.884 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.884 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:29.884 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.884 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:30.141 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:30.141 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:30.141 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:30.141 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:30.400 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:30.400 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 766938
00:32:30.400 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 766938 ']'
00:32:30.400 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 766938
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 766938
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:32:30.401 07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 766938'
killing process with pid 766938
07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 766938
07:01:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 766938
00:32:30.401 Connection closed with partial response:
00:32:30.401
00:32:30.401
00:32:30.671 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 766938
00:32:30.671 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:30.671 [2024-07-15 07:00:44.045267] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:32:30.671 [2024-07-15 07:00:44.045347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766938 ]
00:32:30.671 EAL: No free 2048 kB hugepages reported on node 1
00:32:30.671 [2024-07-15 07:00:44.104393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:30.671 [2024-07-15 07:00:44.190780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:30.671 Running I/O for 90 seconds...
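Every port_status check logged above (multipath_status.sh@64) is the same two-step probe: ask the bdevperf app over its RPC socket for the NVMe I/O path list, then compare one jq-extracted field against the expected value. A minimal sketch of that helper, reconstructed from the logged commands (only the rpc.py invocation and the jq filter are verbatim from the log; the wrapper shape is illustrative):

    port_status() {
        # $1 = listener port (trsvcid), $2 = field to check (current/connected/accessible), $3 = expected value
        local actual
        actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        # succeed only if the observed path state matches the expectation
        [[ "$actual" == "$3" ]]
    }

check_status then asserts six such fields in a row (current, connected, accessible for ports 4420 and 4421 in turn), which is exactly the true/false vector in the check_status true false true true true false call above.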
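The actual failover trigger is the set_ANA_state step at multipath_status.sh@133: the target's listener on port 4420 is moved to non_optimized and the one on 4421 to inaccessible, and after the sleep 1 the host is expected to have made 4420 the current path and to report 4421 as no longer accessible. A sketch of that step, assuming a two-argument wrapper around the rpc.py calls logged above (NQN, transport, address, and ports are verbatim from the log; the wrapper itself is illustrative):

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Note that these two calls go to the target's default RPC socket, while the status probes use -s /var/tmp/bdevperf.sock: the ANA state is changed on the target side and then observed from the host (initiator) side.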
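The rest of the try.txt dump, below, is a stream of nvme_qpair notices: each print_command line is an in-flight bdevperf WRITE or READ (with its lba and length), and the paired print_completion shows it failing with (03/02). That pair is SPDK's (SCT/SC): status code type 0x3, path related, status code 0x02, Asymmetric Access Inaccessible; dnr:0 leaves the Do Not Retry bit clear, so the initiator may retry the I/O on the surviving path. The bursts of these completions (wall-clock 07:00:59 and 07:01:15 in the dump) match moments a listener went inaccessible; the 07:01:15 burst follows the set_ANA_state logged above. An illustrative check, not part of the harness, that the dump contains only this retryable path error:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt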
00:32:30.671 [2024-07-15 07:00:59.857045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:30.671 [2024-07-15 07:00:59.857767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.671 [2024-07-15 07:00:59.857783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.857806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.857822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.857845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.857862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.857935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.857957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.857985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:30.672 [2024-07-15 07:00:59.858495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.858981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.858997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:30.672 [2024-07-15 07:00:59.859705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.672 [2024-07-15 07:00:59.859721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.859747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.859778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.859805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.859821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.859845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.859883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.859913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.859931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:32:30.673 [2024-07-15 07:00:59.859956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.859977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.860563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.860982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.861005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.673 [2024-07-15 07:00:59.861055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:30.673 [2024-07-15 07:00:59.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.861969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.861999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.862015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.862044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.862060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.862089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.862106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.673 [2024-07-15 07:00:59.862135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.673 [2024-07-15 07:00:59.862152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:00:59.862842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:00:59.862915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.862962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.862991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 
dnr:0 00:32:30.674 [2024-07-15 07:00:59.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.674 [2024-07-15 07:00:59.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:00:59.863636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:00:59.863653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.674 [2024-07-15 07:01:15.322914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.674 [2024-07-15 07:01:15.322935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.675 [2024-07-15 07:01:15.322952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:32:30.675 [2024-07-15 07:01:15.322973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.675 [2024-07-15 07:01:15.322989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:32:30.675 [2024-07-15 07:01:15.323200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.675 [2024-07-15 07:01:15.323217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in the same pattern: every queued READ and WRITE on qid:1 (lba 49720-51048) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) between 07:01:15.322 and 07:01:15.339 ...]
00:32:30.680 [2024-07-15 07:01:15.339344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.339571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.339588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
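Every completion in this stretch carries status (03/02). SPDK prints that pair as (SCT/SC) in hex: per the NVMe base specification, Status Code Type 3h is Path Related Status, and Status Code 02h under it is Asymmetric Access Inaccessible — evidently the ANA state this test drives the path into, so the NOTICE flood above is the expected failure mode rather than a data error. Note also how sqhd wraps from 007f back to 0000 above, consistent with a 128-entry submission queue. A minimal standalone decoder for the printed pair (the constant names below are ours for illustration, not SPDK's):

/* Illustrative decoder for the "(SCT/SC)" token that
 * spdk_nvme_print_completion() emits, e.g. "(03/02)" above.
 * Values are from the NVMe base spec; names are ours. */
#include <stdio.h>

#define SCT_PATH            0x3 /* Status Code Type 3h: Path Related Status */
#define SC_ANA_INACCESSIBLE 0x2 /* SC 02h under SCT 3h: ANA Inaccessible    */

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == SCT_PATH && sc == SC_ANA_INACCESSIBLE) {
        return "ASYMMETRIC ACCESS INACCESSIBLE";
    }
    return "other status";
}

int main(void)
{
    /* "(03/02)" -> sct=0x03, sc=0x02, as on every completion above. */
    printf("%s\n", decode_status(0x03, 0x02));
    return 0;
}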
00:32:30.680 [2024-07-15 07:01:15.340321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.340845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.340971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.340993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.341009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.341030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.680 [2024-07-15 07:01:15.341047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.341068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.341084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.341106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.680 [2024-07-15 07:01:15.341122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:30.680 [2024-07-15 07:01:15.341144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.341160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.341181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.341219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.341240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.341256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.341277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.341293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.341318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.341335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.342801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.342845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.342892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.342933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:30.681 [2024-07-15 07:01:15.342970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.342992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.343949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.343970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.343987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.344009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.344025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.344047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.344063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.344085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.344101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
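When the same failure repeats hundreds of times, a tally is easier to scan than the raw stream. A hypothetical triage helper (not part of the autotest) that counts completions per (sct/sc) pair, assuming one record per line as in the raw console log:

/* Reads log lines on stdin and counts completions per (sct/sc)
 * pair parsed from the "(xx/yy)" token in print_completion lines. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    static unsigned counts[8][256]; /* zero-initialized */
    char line[4096];
    unsigned sct, sc;

    while (fgets(line, sizeof(line), stdin)) {
        const char *p = strstr(line, "spdk_nvme_print_completion");
        if (p && sscanf(p, "%*[^(](%x/%x)", &sct, &sc) == 2 &&
            sct < 8 && sc < 256) {
            counts[sct][sc]++;
        }
    }
    for (unsigned t = 0; t < 8; t++) {
        for (unsigned c = 0; c < 256; c++) {
            if (counts[t][c]) {
                printf("sct=%02x sc=%02x : %u completions\n",
                       t, c, counts[t][c]);
            }
        }
    }
    return 0;
}

Run against a log where every completion is ANA INACCESSIBLE, it would print a single "sct=03 sc=02" bucket.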
00:32:30.681 [2024-07-15 07:01:15.344122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-15 07:01:15.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.344161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.681 [2024-07-15 07:01:15.344197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:30.681 [2024-07-15 07:01:15.345566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.345592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.345637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.345675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.345713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.345966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.345982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.346933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.346970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.346991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.347007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.347051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.347088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.347125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:30.682 [2024-07-15 07:01:15.347162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.347202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.347240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.347256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.348874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.348908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.348936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.348953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.348975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.348991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.682 [2024-07-15 07:01:15.349261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.349298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:30.682 [2024-07-15 07:01:15.349319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.682 [2024-07-15 07:01:15.349351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.683 [2024-07-15 07:01:15.349788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:30.683 [2024-07-15 07:01:15.349909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.683 [2024-07-15 07:01:15.349926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
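The command prints also show the same LBAs coming back with fresh cids — e.g. WRITE lba:50864 first as cid:70 and later as cid:43 — which suggests the failed I/O is being reissued while the path stays INACCESSIBLE. A companion sketch (again hypothetical) that flags such reissued WRITEs; it assumes one record per line, the decimal lba: token shown above, and, as in this log, lba < 65536:

/* Flags LBAs that appear in more than one WRITE command print,
 * i.e. I/O apparently resubmitted after an ANA failure. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    static unsigned char seen[65536];
    char line[4096];
    unsigned lba;

    while (fgets(line, sizeof(line), stdin)) {
        const char *p = strstr(line, "WRITE sqid:");
        if (p && (p = strstr(p, "lba:")) != NULL &&
            sscanf(p, "lba:%u", &lba) == 1 &&
            lba < 65536 && ++seen[lba] == 2) {
            printf("lba %u written more than once (reissued)\n", lba);
        }
    }
    return 0;
}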
00:32:30.683 [2024-07-15 07:01:15.349948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.683 [2024-07-15 07:01:15.349964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:30.683 [2024-07-15 07:01:15.349986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.683 [2024-07-15 07:01:15.350002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:30.683 [... repeated command/completion pairs omitted: READ and WRITE commands on sqid:1 (nsid:1, lba 49824-52384, len:8) each complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0; sqhd advances 0028 through 007f, wraps to 0000, and reaches 0073 by 07:01:15.369 ...]
00:32:30.688 [2024-07-15 07:01:15.369590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.688 [2024-07-15 07:01:15.369606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:30.688 [2024-07-15 07:01:15.369628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.688 [2024-07-15 07:01:15.369644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.688 [2024-07-15 07:01:15.369682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.688 [2024-07-15 07:01:15.369719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.688 [2024-07-15 07:01:15.369757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.369795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.369852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.369918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.369958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.369979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.369995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.370016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.370032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.370054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.370070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.370091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.370107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.370129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.688 [2024-07-15 07:01:15.370145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.371281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.688 [2024-07-15 07:01:15.371306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.688 [2024-07-15 07:01:15.371349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.688 [2024-07-15 07:01:15.371366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.371674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.371712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.371749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.371765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:32:30.689 [2024-07-15 07:01:15.372730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.372949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.372970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.372986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.373024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.373137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.373256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.373293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.373331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.373503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.373519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.374885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.689 [2024-07-15 07:01:15.374911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.374955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.374977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.374999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.375016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.375037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.375059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:30.689 [2024-07-15 07:01:15.375082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.689 [2024-07-15 07:01:15.375098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:30.690 [2024-07-15 07:01:15.375248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.690 [2024-07-15 07:01:15.375483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.690 [2024-07-15 07:01:15.375617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:30.690 [2024-07-15 07:01:15.375638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.690 [2024-07-15 07:01:15.375655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:32:30.690 [2024-07-15 07:01:15.375677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.690 [2024-07-15 07:01:15.375693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:32:30.690 [2024-07-15 07:01:15.375714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.690 [2024-07-15 07:01:15.375730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:32:30.690 [2024-07-15 07:01:15.375752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.690 [2024-07-15 07:01:15.375768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:32:30.690 [2024-07-15 07:01:15.375790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.690 [2024-07-15 07:01:15.375806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:30.690 Received shutdown signal, test time was about 32.157050 seconds
00:32:30.690
00:32:30.690 Latency(us)
00:32:30.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:30.690 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:30.690 Verification LBA range: start 0x0 length 0x4000
00:32:30.690 Nvme0n1 : 32.16 8010.16 31.29 0.00 0.00 15954.25 1128.68 4026531.84
00:32:30.690 ===================================================================================================================
00:32:30.690 Total : 8010.16 31.29 0.00 0.00 15954.25 1128.68 4026531.84
00:32:30.690 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:30.950 rmmod nvme_tcp
00:32:30.950 rmmod nvme_fabrics
00:32:30.950 rmmod nvme_keyring
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 766655 ']'
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 766655
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 766655 ']'
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 766655
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 766655
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 766655'
00:32:30.950 killing process with pid 766655
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 766655
00:32:30.950 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 766655
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:31.210 07:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:33.743 07:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:33.743
00:32:33.743 real 0m40.738s
00:32:33.743 user 2m3.162s
00:32:33.743 sys 0m10.277s
00:32:33.743 07:01:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:32:33.743 07:01:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:33.743 ************************************
00:32:33.743 END TEST nvmf_host_multipath_status
00:32:33.743 ************************************
00:32:33.743 07:01:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:33.743 07:01:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:32:33.743
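
The summary above closes the multipath test: the reported 31.29 MiB/s is consistent with 8010.16 IOPS at the 4096-byte I/O size (8010.16 * 4096 / 1048576 = 31.29), and teardown deletes the subsystem over JSON-RPC before unloading the kernel modules. A minimal standalone sketch of that teardown, assuming a target is already running with the subsystem NQN from this run and a hypothetical $SPDK_DIR pointing at an SPDK checkout:

  #!/usr/bin/env bash
  # Teardown sketch mirroring the nvmftestfini path traced above.
  # SPDK_DIR is a placeholder, not a value taken from this run.
  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
  # Delete the NVMe-oF subsystem via the JSON-RPC helper.
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  # Unload the transport modules in dependency order (tcp before fabrics).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
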
07:01:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:32:33.743 07:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:33.743 ************************************
00:32:33.743 START TEST nvmf_discovery_remove_ifc
00:32:33.743 ************************************
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:33.743 * Looking for test storage...
00:32:33.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:33.743 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:33.744 07:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:35.641 07:01:22 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:35.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:35.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:35.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:35.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:35.641 
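
The nvmf_tcp_init steps traced just below split this one host into a target/initiator pair: port cvl_0_0 moves into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction verifies the link. A minimal sketch of the same topology, assuming two spare ports with the hypothetical names eth_tgt and eth_ini and a namespace name of my choosing:

  #!/usr/bin/env bash
  # Namespace-split sketch; eth_tgt/eth_ini stand in for cvl_0_0/cvl_0_1.
  NS=nvmf_tgt_ns
  ip netns add "$NS"
  ip link set eth_tgt netns "$NS"            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev eth_ini        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec "$NS" ip link set eth_tgt up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP traffic, then check reachability both ways.
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
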
07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.641 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:35.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:32:35.642 00:32:35.642 --- 10.0.0.2 ping statistics --- 00:32:35.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.642 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:35.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:32:35.642 00:32:35.642 --- 10.0.0.1 ping statistics --- 00:32:35.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.642 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=773043 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 773043 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 773043 ']' 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:35.642 07:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.642 [2024-07-15 07:01:23.009933] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
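Stripped of the xtrace noise, the namespace plumbing traced above reduces to a short iproute2 sequence; a condensed sketch using exactly this run's interface names and addresses:

    # Single-host NVMe/TCP topology: the target NIC lives in its own netns.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev cvl_0_1                                         # initiator side
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    # Admit NVMe/TCP (port 4420) on the initiator side, then verify both paths.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1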
00:32:35.642 [2024-07-15 07:01:23.010015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.642 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.642 [2024-07-15 07:01:23.078209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.642 [2024-07-15 07:01:23.167886] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.642 [2024-07-15 07:01:23.167952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.642 [2024-07-15 07:01:23.167968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.642 [2024-07-15 07:01:23.167981] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.642 [2024-07-15 07:01:23.167994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.642 [2024-07-15 07:01:23.168024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.899 [2024-07-15 07:01:23.319860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.899 [2024-07-15 07:01:23.328107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:35.899 null0 00:32:35.899 [2024-07-15 07:01:23.360001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=773141 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 773141 /tmp/host.sock 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 773141 ']' 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:35.899 
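Earlier in this stretch the target app (pid 773043) reported '*** TCP Transport Init ***', a null0 bdev, and listeners on 10.0.0.2:8009 and 10.0.0.2:4420; the rpc_cmd at discovery_remove_ifc.sh@43 that produced them is not expanded in the trace. A hedged reconstruction of an equivalent RPC sequence (the bdev size and subsystem options are placeholders, not the script's literal values):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o          # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
    $rpc bdev_null_create null0 1000 512          # placeholder: 1000 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    # The two listener notices above: discovery on 8009, I/O on 4420.
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420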
07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:35.899 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:35.899 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.899 [2024-07-15 07:01:23.425321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:35.899 [2024-07-15 07:01:23.425398] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773141 ] 00:32:35.899 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.899 [2024-07-15 07:01:23.487402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.158 [2024-07-15 07:01:23.579269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.158 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.416 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.416 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:36.416 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.416 07:01:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:37.355 [2024-07-15 07:01:24.827712] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.355 [2024-07-15 07:01:24.827755] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.355 [2024-07-15 07:01:24.827779] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.355 [2024-07-15 07:01:24.956237] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:37.613 [2024-07-15 07:01:25.180559] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:37.613 [2024-07-15 07:01:25.180630] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:37.613 [2024-07-15 07:01:25.180671] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:37.613 [2024-07-15 07:01:25.180697] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.613 [2024-07-15 07:01:25.180735] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:37.613 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.613 [2024-07-15 07:01:25.227563] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e5fdf0 was disconnected and freed. delete nvme_qpair. 
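The host side traced above boils down to: start a second nvmf_tgt pinned to /tmp/host.sock with --wait-for-rpc, set bdev_nvme options before subsystem init, then start discovery with deliberately short timeouts so the interface removal later bites quickly. Condensed (flags copied from the trace; the harness additionally waits for /tmp/host.sock to appear before issuing RPCs):

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$SPDK_BIN/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
    $rpc -s /tmp/host.sock framework_start_init
    # ~2 reconnect attempts at 1 s spacing before the controller is deleted.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach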
00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:37.873 07:01:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:38.865 07:01:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
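The bdev_get_bdevs | jq | sort | xargs pipeline that repeats from here on is the test's probe for attach/detach progress. Reduced to the two helpers the trace keeps calling (a faithful condensation of discovery_remove_ifc.sh@29/@33, not a verbatim copy):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # One sorted, space-separated line of bdev names as the host app sees them.
    get_bdev_list() {
        $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Poll once per second until the list matches ('' means: no bdevs left).
    wait_for_bdev() {
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # discovery attached cnode0, so nvme0n1 must show up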
00:32:39.802 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.062 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:40.062 07:01:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:41.000 07:01:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:41.934 07:01:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:43.312 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:43.312 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 
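The fault was injected at sh@75-76 just before these polling loops: the target address is deleted and the link downed inside the namespace, and the loops then wait for the host's bdev list to empty once the 2-second controller-loss timeout expires. In isolation (reusing the wait_for_bdev helper sketched earlier):

    # Drop the target port out from under the established connection.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''   # nvme0n1 disappears once reconnects exhaust the timeout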
00:32:43.313 07:01:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:43.313 [2024-07-15 07:01:30.621548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:43.313 [2024-07-15 07:01:30.621621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.313 [2024-07-15 07:01:30.621644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.313 [2024-07-15 07:01:30.621663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.313 [2024-07-15 07:01:30.621678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.313 [2024-07-15 07:01:30.621693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.313 [2024-07-15 07:01:30.621708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.313 [2024-07-15 07:01:30.621724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.313 [2024-07-15 07:01:30.621739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.313 [2024-07-15 07:01:30.621755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.313 [2024-07-15 07:01:30.621781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.313 [2024-07-15 07:01:30.621796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26f80 is same with the state(5) to be set 00:32:43.313 [2024-07-15 07:01:30.631570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26f80 (9): Bad file descriptor 00:32:43.313 [2024-07-15 07:01:30.641625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:44.254 [2024-07-15 07:01:31.666940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:44.254 [2024-07-15 07:01:31.667008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e26f80 with 
addr=10.0.0.2, port=4420 00:32:44.254 [2024-07-15 07:01:31.667036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26f80 is same with the state(5) to be set 00:32:44.254 [2024-07-15 07:01:31.667083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26f80 (9): Bad file descriptor 00:32:44.254 [2024-07-15 07:01:31.667520] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:44.254 [2024-07-15 07:01:31.667554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:44.254 [2024-07-15 07:01:31.667573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:44.254 [2024-07-15 07:01:31.667590] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:44.254 [2024-07-15 07:01:31.667623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.254 [2024-07-15 07:01:31.667642] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:44.254 07:01:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:45.191 [2024-07-15 07:01:32.670136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.191 [2024-07-15 07:01:32.670182] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.191 [2024-07-15 07:01:32.670195] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.191 [2024-07-15 07:01:32.670207] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:45.191 [2024-07-15 07:01:32.670241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
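The errno-110 connect failures and 'Resetting controller failed' records above are the intended shape of this test: with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, the driver gets roughly two reconnect attempts before deleting the controller outright. While that window is open the controller state is observable from the host app; a small sketch (bdev_nvme_get_controllers is the stock SPDK RPC for this, though its output fields vary across releases):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0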
00:32:45.191 [2024-07-15 07:01:32.670276] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:45.191 [2024-07-15 07:01:32.670317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.191 [2024-07-15 07:01:32.670339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.191 [2024-07-15 07:01:32.670368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.191 [2024-07-15 07:01:32.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.191 [2024-07-15 07:01:32.670399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.191 [2024-07-15 07:01:32.670413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.191 [2024-07-15 07:01:32.670428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.191 [2024-07-15 07:01:32.670441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.191 [2024-07-15 07:01:32.670456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.191 [2024-07-15 07:01:32.670470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.191 [2024-07-15 07:01:32.670484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
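The discovery controller goes down in parallel (the 'Remove discovery entry' record above), and its view can be queried the same way; a sketch, assuming the bdev_nvme_get_discovery_info RPC is available in this SPDK build:

    # Dump the host app's discovery-service state (entries, attach status).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info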
00:32:45.191 [2024-07-15 07:01:32.670838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26410 (9): Bad file descriptor 00:32:45.191 [2024-07-15 07:01:32.671857] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:45.191 [2024-07-15 07:01:32.671889] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.191 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:45.192 07:01:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:46.585 07:01:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:47.150 [2024-07-15 07:01:34.683729] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:47.150 [2024-07-15 07:01:34.683754] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:47.150 [2024-07-15 07:01:34.683779] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.408 [2024-07-15 07:01:34.771098] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:47.408 07:01:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:47.408 [2024-07-15 07:01:34.997428] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:47.408 [2024-07-15 07:01:34.997480] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:47.408 [2024-07-15 07:01:34.997515] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:47.408 [2024-07-15 07:01:34.997540] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:47.408 [2024-07-15 07:01:34.997554] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:47.665 [2024-07-15 07:01:35.043561] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e6aa40 was disconnected and freed. delete nvme_qpair. 
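Recovery, traced at sh@82-83 above, mirrors the fault: the address returns, the link comes back up, and the still-running discovery service re-attaches the subsystem as a brand-new controller (nvme1, hence the nvme1n1 expectation). Condensed, again using the earlier wait_for_bdev helper:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # a fresh attach gets a fresh controller name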
00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 773141 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 773141 ']' 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 773141 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 773141 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 773141' 00:32:48.598 killing process with pid 773141 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 773141 00:32:48.598 07:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 773141 00:32:48.598 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:48.598 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:48.598 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:48.599 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:48.599 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:48.599 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:48.599 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:48.599 rmmod nvme_tcp 00:32:48.599 rmmod nvme_fabrics 00:32:48.858 rmmod nvme_keyring 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:48.858 
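Teardown, condensed from the killprocess/nvmftestfini trace above (the retry loop is the harness's pattern for racing rmmod against lingering module references; simplified here):

    kill "$hostpid" && wait "$hostpid"
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics/nvme_keyring deps
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e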
07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 773043 ']' 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 773043 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 773043 ']' 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 773043 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 773043 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 773043' 00:32:48.858 killing process with pid 773043 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 773043 00:32:48.858 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 773043 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.118 07:01:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.027 07:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:51.027 00:32:51.027 real 0m17.732s 00:32:51.027 user 0m25.964s 00:32:51.027 sys 0m2.937s 00:32:51.027 07:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:51.027 07:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:51.027 ************************************ 00:32:51.027 END TEST nvmf_discovery_remove_ifc 00:32:51.027 ************************************ 00:32:51.027 07:01:38 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:51.027 07:01:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:51.027 07:01:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.027 07:01:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:51.027 ************************************ 00:32:51.027 START TEST nvmf_identify_kernel_target 00:32:51.027 ************************************ 00:32:51.027 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:51.287 * Looking for test storage... 00:32:51.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.287 07:01:38 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:51.287 07:01:38 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:51.287 07:01:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:53.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:53.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:53.193 07:01:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:32:53.193 Found net devices under 0000:0a:00.0: cvl_0_0
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:32:53.193 Found net devices under 0000:0a:00.1: cvl_0_1
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:53.193 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:53.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:53.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms
00:32:53.193
00:32:53.194 --- 10.0.0.2 ping statistics ---
00:32:53.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:53.194 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:53.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:53.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:32:53.194
00:32:53.194 --- 10.0.0.1 ping statistics ---
00:32:53.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:53.194 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:53.194 07:01:40
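Condensed, the nvmf_tcp_init sequence traced above amounts to the following shell steps; this is a sketch assembled from the commands already shown (the interface and namespace names cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk are specific to this run and will differ on other hardware). The two pings verify reachability in both directions before the kernel target is configured.

ip netns add cvl_0_0_ns_spdk                       # fresh namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one physical port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator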
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:53.194 07:01:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:54.129 Waiting for block devices as requested 00:32:54.386 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:54.386 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:54.386 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:54.643 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:54.643 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:54.644 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:54.903 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:54.903 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:54.903 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:54.903 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:54.903 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:55.163 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:55.163 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:55.163 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:55.421 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:55.421 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:55.421 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:55.697 07:01:43 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:55.697 No valid GPT data, bailing 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:55.697 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:55.697 00:32:55.697 Discovery Log Number of Records 2, Generation counter 2 00:32:55.697 =====Discovery Log Entry 0====== 00:32:55.697 trtype: tcp 00:32:55.697 adrfam: ipv4 00:32:55.697 subtype: current discovery subsystem 00:32:55.697 treq: not specified, sq flow control disable supported 00:32:55.697 portid: 1 00:32:55.697 trsvcid: 4420 00:32:55.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:55.697 traddr: 10.0.0.1 00:32:55.697 eflags: none 00:32:55.697 sectype: none 00:32:55.697 =====Discovery Log Entry 1====== 
00:32:55.697 trtype: tcp 00:32:55.697 adrfam: ipv4 00:32:55.697 subtype: nvme subsystem 00:32:55.697 treq: not specified, sq flow control disable supported 00:32:55.697 portid: 1 00:32:55.697 trsvcid: 4420 00:32:55.697 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:55.697 traddr: 10.0.0.1 00:32:55.697 eflags: none 00:32:55.697 sectype: none 00:32:55.698 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:55.698 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:55.698 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.698 ===================================================== 00:32:55.698 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:55.698 ===================================================== 00:32:55.698 Controller Capabilities/Features 00:32:55.698 ================================ 00:32:55.698 Vendor ID: 0000 00:32:55.698 Subsystem Vendor ID: 0000 00:32:55.698 Serial Number: 921c89eeebd3eb547be5 00:32:55.698 Model Number: Linux 00:32:55.698 Firmware Version: 6.7.0-68 00:32:55.698 Recommended Arb Burst: 0 00:32:55.698 IEEE OUI Identifier: 00 00 00 00:32:55.698 Multi-path I/O 00:32:55.698 May have multiple subsystem ports: No 00:32:55.698 May have multiple controllers: No 00:32:55.698 Associated with SR-IOV VF: No 00:32:55.698 Max Data Transfer Size: Unlimited 00:32:55.698 Max Number of Namespaces: 0 00:32:55.698 Max Number of I/O Queues: 1024 00:32:55.698 NVMe Specification Version (VS): 1.3 00:32:55.698 NVMe Specification Version (Identify): 1.3 00:32:55.698 Maximum Queue Entries: 1024 00:32:55.698 Contiguous Queues Required: No 00:32:55.698 Arbitration Mechanisms Supported 00:32:55.698 Weighted Round Robin: Not Supported 00:32:55.698 Vendor Specific: Not Supported 00:32:55.698 Reset Timeout: 7500 ms 00:32:55.698 Doorbell Stride: 4 bytes 00:32:55.698 NVM Subsystem Reset: Not Supported 00:32:55.698 Command Sets Supported 00:32:55.698 NVM Command Set: Supported 00:32:55.698 Boot Partition: Not Supported 00:32:55.698 Memory Page Size Minimum: 4096 bytes 00:32:55.698 Memory Page Size Maximum: 4096 bytes 00:32:55.698 Persistent Memory Region: Not Supported 00:32:55.698 Optional Asynchronous Events Supported 00:32:55.698 Namespace Attribute Notices: Not Supported 00:32:55.698 Firmware Activation Notices: Not Supported 00:32:55.698 ANA Change Notices: Not Supported 00:32:55.698 PLE Aggregate Log Change Notices: Not Supported 00:32:55.698 LBA Status Info Alert Notices: Not Supported 00:32:55.698 EGE Aggregate Log Change Notices: Not Supported 00:32:55.698 Normal NVM Subsystem Shutdown event: Not Supported 00:32:55.698 Zone Descriptor Change Notices: Not Supported 00:32:55.698 Discovery Log Change Notices: Supported 00:32:55.698 Controller Attributes 00:32:55.698 128-bit Host Identifier: Not Supported 00:32:55.698 Non-Operational Permissive Mode: Not Supported 00:32:55.698 NVM Sets: Not Supported 00:32:55.698 Read Recovery Levels: Not Supported 00:32:55.698 Endurance Groups: Not Supported 00:32:55.698 Predictable Latency Mode: Not Supported 00:32:55.698 Traffic Based Keep ALive: Not Supported 00:32:55.698 Namespace Granularity: Not Supported 00:32:55.698 SQ Associations: Not Supported 00:32:55.698 UUID List: Not Supported 00:32:55.698 Multi-Domain Subsystem: Not Supported 00:32:55.698 Fixed Capacity Management: Not Supported 00:32:55.698 Variable Capacity Management: Not 
Supported 00:32:55.698 Delete Endurance Group: Not Supported 00:32:55.698 Delete NVM Set: Not Supported 00:32:55.698 Extended LBA Formats Supported: Not Supported 00:32:55.698 Flexible Data Placement Supported: Not Supported 00:32:55.698 00:32:55.698 Controller Memory Buffer Support 00:32:55.698 ================================ 00:32:55.698 Supported: No 00:32:55.698 00:32:55.698 Persistent Memory Region Support 00:32:55.698 ================================ 00:32:55.698 Supported: No 00:32:55.698 00:32:55.698 Admin Command Set Attributes 00:32:55.698 ============================ 00:32:55.698 Security Send/Receive: Not Supported 00:32:55.698 Format NVM: Not Supported 00:32:55.698 Firmware Activate/Download: Not Supported 00:32:55.698 Namespace Management: Not Supported 00:32:55.698 Device Self-Test: Not Supported 00:32:55.698 Directives: Not Supported 00:32:55.698 NVMe-MI: Not Supported 00:32:55.698 Virtualization Management: Not Supported 00:32:55.698 Doorbell Buffer Config: Not Supported 00:32:55.698 Get LBA Status Capability: Not Supported 00:32:55.698 Command & Feature Lockdown Capability: Not Supported 00:32:55.698 Abort Command Limit: 1 00:32:55.698 Async Event Request Limit: 1 00:32:55.698 Number of Firmware Slots: N/A 00:32:55.698 Firmware Slot 1 Read-Only: N/A 00:32:55.698 Firmware Activation Without Reset: N/A 00:32:55.698 Multiple Update Detection Support: N/A 00:32:55.698 Firmware Update Granularity: No Information Provided 00:32:55.698 Per-Namespace SMART Log: No 00:32:55.698 Asymmetric Namespace Access Log Page: Not Supported 00:32:55.698 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:55.698 Command Effects Log Page: Not Supported 00:32:55.698 Get Log Page Extended Data: Supported 00:32:55.698 Telemetry Log Pages: Not Supported 00:32:55.698 Persistent Event Log Pages: Not Supported 00:32:55.698 Supported Log Pages Log Page: May Support 00:32:55.698 Commands Supported & Effects Log Page: Not Supported 00:32:55.698 Feature Identifiers & Effects Log Page:May Support 00:32:55.698 NVMe-MI Commands & Effects Log Page: May Support 00:32:55.698 Data Area 4 for Telemetry Log: Not Supported 00:32:55.698 Error Log Page Entries Supported: 1 00:32:55.698 Keep Alive: Not Supported 00:32:55.698 00:32:55.698 NVM Command Set Attributes 00:32:55.698 ========================== 00:32:55.698 Submission Queue Entry Size 00:32:55.698 Max: 1 00:32:55.698 Min: 1 00:32:55.698 Completion Queue Entry Size 00:32:55.698 Max: 1 00:32:55.698 Min: 1 00:32:55.698 Number of Namespaces: 0 00:32:55.698 Compare Command: Not Supported 00:32:55.698 Write Uncorrectable Command: Not Supported 00:32:55.698 Dataset Management Command: Not Supported 00:32:55.698 Write Zeroes Command: Not Supported 00:32:55.698 Set Features Save Field: Not Supported 00:32:55.698 Reservations: Not Supported 00:32:55.698 Timestamp: Not Supported 00:32:55.698 Copy: Not Supported 00:32:55.698 Volatile Write Cache: Not Present 00:32:55.698 Atomic Write Unit (Normal): 1 00:32:55.698 Atomic Write Unit (PFail): 1 00:32:55.698 Atomic Compare & Write Unit: 1 00:32:55.698 Fused Compare & Write: Not Supported 00:32:55.698 Scatter-Gather List 00:32:55.698 SGL Command Set: Supported 00:32:55.698 SGL Keyed: Not Supported 00:32:55.698 SGL Bit Bucket Descriptor: Not Supported 00:32:55.698 SGL Metadata Pointer: Not Supported 00:32:55.698 Oversized SGL: Not Supported 00:32:55.698 SGL Metadata Address: Not Supported 00:32:55.698 SGL Offset: Supported 00:32:55.698 Transport SGL Data Block: Not Supported 00:32:55.698 Replay Protected Memory Block: 
Not Supported 00:32:55.698 00:32:55.698 Firmware Slot Information 00:32:55.698 ========================= 00:32:55.698 Active slot: 0 00:32:55.698 00:32:55.698 00:32:55.698 Error Log 00:32:55.698 ========= 00:32:55.699 00:32:55.699 Active Namespaces 00:32:55.699 ================= 00:32:55.699 Discovery Log Page 00:32:55.699 ================== 00:32:55.699 Generation Counter: 2 00:32:55.699 Number of Records: 2 00:32:55.699 Record Format: 0 00:32:55.699 00:32:55.699 Discovery Log Entry 0 00:32:55.699 ---------------------- 00:32:55.699 Transport Type: 3 (TCP) 00:32:55.699 Address Family: 1 (IPv4) 00:32:55.699 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:55.699 Entry Flags: 00:32:55.699 Duplicate Returned Information: 0 00:32:55.699 Explicit Persistent Connection Support for Discovery: 0 00:32:55.699 Transport Requirements: 00:32:55.699 Secure Channel: Not Specified 00:32:55.699 Port ID: 1 (0x0001) 00:32:55.699 Controller ID: 65535 (0xffff) 00:32:55.699 Admin Max SQ Size: 32 00:32:55.699 Transport Service Identifier: 4420 00:32:55.699 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:55.699 Transport Address: 10.0.0.1 00:32:55.699 Discovery Log Entry 1 00:32:55.699 ---------------------- 00:32:55.699 Transport Type: 3 (TCP) 00:32:55.699 Address Family: 1 (IPv4) 00:32:55.699 Subsystem Type: 2 (NVM Subsystem) 00:32:55.699 Entry Flags: 00:32:55.699 Duplicate Returned Information: 0 00:32:55.699 Explicit Persistent Connection Support for Discovery: 0 00:32:55.699 Transport Requirements: 00:32:55.699 Secure Channel: Not Specified 00:32:55.699 Port ID: 1 (0x0001) 00:32:55.699 Controller ID: 65535 (0xffff) 00:32:55.699 Admin Max SQ Size: 32 00:32:55.699 Transport Service Identifier: 4420 00:32:55.699 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:55.699 Transport Address: 10.0.0.1 00:32:55.699 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:55.963 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.963 get_feature(0x01) failed 00:32:55.963 get_feature(0x02) failed 00:32:55.963 get_feature(0x04) failed 00:32:55.963 ===================================================== 00:32:55.963 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:55.963 ===================================================== 00:32:55.963 Controller Capabilities/Features 00:32:55.963 ================================ 00:32:55.963 Vendor ID: 0000 00:32:55.963 Subsystem Vendor ID: 0000 00:32:55.963 Serial Number: 1144802389f3b80185ac 00:32:55.963 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:55.963 Firmware Version: 6.7.0-68 00:32:55.963 Recommended Arb Burst: 6 00:32:55.963 IEEE OUI Identifier: 00 00 00 00:32:55.963 Multi-path I/O 00:32:55.963 May have multiple subsystem ports: Yes 00:32:55.963 May have multiple controllers: Yes 00:32:55.963 Associated with SR-IOV VF: No 00:32:55.963 Max Data Transfer Size: Unlimited 00:32:55.963 Max Number of Namespaces: 1024 00:32:55.963 Max Number of I/O Queues: 128 00:32:55.963 NVMe Specification Version (VS): 1.3 00:32:55.963 NVMe Specification Version (Identify): 1.3 00:32:55.963 Maximum Queue Entries: 1024 00:32:55.963 Contiguous Queues Required: No 00:32:55.963 Arbitration Mechanisms Supported 00:32:55.963 Weighted Round Robin: Not Supported 00:32:55.963 Vendor Specific: Not Supported 
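The two controller dumps in this section come from the same identify utility pointed at two different NQNs. Reassembled from the trace onto single lines (the trace wraps the -r argument), the invocations are:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Discovery controller: produced the discovery log and the capability dump above.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
# Kernel NVM subsystem: produces the dump now printing, including the ANA and
# Active Namespaces sections further below.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'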
00:32:55.963 Reset Timeout: 7500 ms 00:32:55.963 Doorbell Stride: 4 bytes 00:32:55.963 NVM Subsystem Reset: Not Supported 00:32:55.963 Command Sets Supported 00:32:55.963 NVM Command Set: Supported 00:32:55.963 Boot Partition: Not Supported 00:32:55.963 Memory Page Size Minimum: 4096 bytes 00:32:55.963 Memory Page Size Maximum: 4096 bytes 00:32:55.963 Persistent Memory Region: Not Supported 00:32:55.963 Optional Asynchronous Events Supported 00:32:55.963 Namespace Attribute Notices: Supported 00:32:55.963 Firmware Activation Notices: Not Supported 00:32:55.963 ANA Change Notices: Supported 00:32:55.963 PLE Aggregate Log Change Notices: Not Supported 00:32:55.963 LBA Status Info Alert Notices: Not Supported 00:32:55.963 EGE Aggregate Log Change Notices: Not Supported 00:32:55.963 Normal NVM Subsystem Shutdown event: Not Supported 00:32:55.963 Zone Descriptor Change Notices: Not Supported 00:32:55.963 Discovery Log Change Notices: Not Supported 00:32:55.963 Controller Attributes 00:32:55.963 128-bit Host Identifier: Supported 00:32:55.963 Non-Operational Permissive Mode: Not Supported 00:32:55.963 NVM Sets: Not Supported 00:32:55.963 Read Recovery Levels: Not Supported 00:32:55.963 Endurance Groups: Not Supported 00:32:55.963 Predictable Latency Mode: Not Supported 00:32:55.963 Traffic Based Keep ALive: Supported 00:32:55.963 Namespace Granularity: Not Supported 00:32:55.963 SQ Associations: Not Supported 00:32:55.963 UUID List: Not Supported 00:32:55.963 Multi-Domain Subsystem: Not Supported 00:32:55.963 Fixed Capacity Management: Not Supported 00:32:55.963 Variable Capacity Management: Not Supported 00:32:55.963 Delete Endurance Group: Not Supported 00:32:55.963 Delete NVM Set: Not Supported 00:32:55.963 Extended LBA Formats Supported: Not Supported 00:32:55.963 Flexible Data Placement Supported: Not Supported 00:32:55.963 00:32:55.963 Controller Memory Buffer Support 00:32:55.963 ================================ 00:32:55.963 Supported: No 00:32:55.963 00:32:55.963 Persistent Memory Region Support 00:32:55.963 ================================ 00:32:55.963 Supported: No 00:32:55.963 00:32:55.963 Admin Command Set Attributes 00:32:55.963 ============================ 00:32:55.963 Security Send/Receive: Not Supported 00:32:55.963 Format NVM: Not Supported 00:32:55.963 Firmware Activate/Download: Not Supported 00:32:55.963 Namespace Management: Not Supported 00:32:55.963 Device Self-Test: Not Supported 00:32:55.963 Directives: Not Supported 00:32:55.963 NVMe-MI: Not Supported 00:32:55.963 Virtualization Management: Not Supported 00:32:55.963 Doorbell Buffer Config: Not Supported 00:32:55.963 Get LBA Status Capability: Not Supported 00:32:55.963 Command & Feature Lockdown Capability: Not Supported 00:32:55.963 Abort Command Limit: 4 00:32:55.963 Async Event Request Limit: 4 00:32:55.963 Number of Firmware Slots: N/A 00:32:55.963 Firmware Slot 1 Read-Only: N/A 00:32:55.963 Firmware Activation Without Reset: N/A 00:32:55.963 Multiple Update Detection Support: N/A 00:32:55.963 Firmware Update Granularity: No Information Provided 00:32:55.963 Per-Namespace SMART Log: Yes 00:32:55.963 Asymmetric Namespace Access Log Page: Supported 00:32:55.963 ANA Transition Time : 10 sec 00:32:55.963 00:32:55.963 Asymmetric Namespace Access Capabilities 00:32:55.963 ANA Optimized State : Supported 00:32:55.963 ANA Non-Optimized State : Supported 00:32:55.963 ANA Inaccessible State : Supported 00:32:55.963 ANA Persistent Loss State : Supported 00:32:55.963 ANA Change State : Supported 00:32:55.963 ANAGRPID is not 
changed : No 00:32:55.963 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:55.963 00:32:55.963 ANA Group Identifier Maximum : 128 00:32:55.963 Number of ANA Group Identifiers : 128 00:32:55.963 Max Number of Allowed Namespaces : 1024 00:32:55.963 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:55.963 Command Effects Log Page: Supported 00:32:55.963 Get Log Page Extended Data: Supported 00:32:55.963 Telemetry Log Pages: Not Supported 00:32:55.963 Persistent Event Log Pages: Not Supported 00:32:55.963 Supported Log Pages Log Page: May Support 00:32:55.963 Commands Supported & Effects Log Page: Not Supported 00:32:55.963 Feature Identifiers & Effects Log Page:May Support 00:32:55.963 NVMe-MI Commands & Effects Log Page: May Support 00:32:55.963 Data Area 4 for Telemetry Log: Not Supported 00:32:55.963 Error Log Page Entries Supported: 128 00:32:55.963 Keep Alive: Supported 00:32:55.963 Keep Alive Granularity: 1000 ms 00:32:55.963 00:32:55.963 NVM Command Set Attributes 00:32:55.963 ========================== 00:32:55.963 Submission Queue Entry Size 00:32:55.963 Max: 64 00:32:55.963 Min: 64 00:32:55.963 Completion Queue Entry Size 00:32:55.963 Max: 16 00:32:55.963 Min: 16 00:32:55.963 Number of Namespaces: 1024 00:32:55.963 Compare Command: Not Supported 00:32:55.963 Write Uncorrectable Command: Not Supported 00:32:55.963 Dataset Management Command: Supported 00:32:55.963 Write Zeroes Command: Supported 00:32:55.963 Set Features Save Field: Not Supported 00:32:55.963 Reservations: Not Supported 00:32:55.963 Timestamp: Not Supported 00:32:55.963 Copy: Not Supported 00:32:55.963 Volatile Write Cache: Present 00:32:55.963 Atomic Write Unit (Normal): 1 00:32:55.963 Atomic Write Unit (PFail): 1 00:32:55.963 Atomic Compare & Write Unit: 1 00:32:55.963 Fused Compare & Write: Not Supported 00:32:55.963 Scatter-Gather List 00:32:55.963 SGL Command Set: Supported 00:32:55.963 SGL Keyed: Not Supported 00:32:55.963 SGL Bit Bucket Descriptor: Not Supported 00:32:55.963 SGL Metadata Pointer: Not Supported 00:32:55.963 Oversized SGL: Not Supported 00:32:55.963 SGL Metadata Address: Not Supported 00:32:55.963 SGL Offset: Supported 00:32:55.963 Transport SGL Data Block: Not Supported 00:32:55.963 Replay Protected Memory Block: Not Supported 00:32:55.963 00:32:55.963 Firmware Slot Information 00:32:55.963 ========================= 00:32:55.963 Active slot: 0 00:32:55.963 00:32:55.963 Asymmetric Namespace Access 00:32:55.963 =========================== 00:32:55.963 Change Count : 0 00:32:55.963 Number of ANA Group Descriptors : 1 00:32:55.963 ANA Group Descriptor : 0 00:32:55.963 ANA Group ID : 1 00:32:55.963 Number of NSID Values : 1 00:32:55.963 Change Count : 0 00:32:55.963 ANA State : 1 00:32:55.963 Namespace Identifier : 1 00:32:55.963 00:32:55.963 Commands Supported and Effects 00:32:55.963 ============================== 00:32:55.963 Admin Commands 00:32:55.963 -------------- 00:32:55.963 Get Log Page (02h): Supported 00:32:55.963 Identify (06h): Supported 00:32:55.963 Abort (08h): Supported 00:32:55.963 Set Features (09h): Supported 00:32:55.963 Get Features (0Ah): Supported 00:32:55.963 Asynchronous Event Request (0Ch): Supported 00:32:55.963 Keep Alive (18h): Supported 00:32:55.963 I/O Commands 00:32:55.963 ------------ 00:32:55.963 Flush (00h): Supported 00:32:55.963 Write (01h): Supported LBA-Change 00:32:55.963 Read (02h): Supported 00:32:55.963 Write Zeroes (08h): Supported LBA-Change 00:32:55.963 Dataset Management (09h): Supported 00:32:55.963 00:32:55.963 Error Log 00:32:55.963 ========= 
00:32:55.963 Entry: 0 00:32:55.963 Error Count: 0x3 00:32:55.963 Submission Queue Id: 0x0 00:32:55.963 Command Id: 0x5 00:32:55.963 Phase Bit: 0 00:32:55.963 Status Code: 0x2 00:32:55.963 Status Code Type: 0x0 00:32:55.963 Do Not Retry: 1 00:32:55.963 Error Location: 0x28 00:32:55.963 LBA: 0x0 00:32:55.963 Namespace: 0x0 00:32:55.963 Vendor Log Page: 0x0 00:32:55.963 ----------- 00:32:55.964 Entry: 1 00:32:55.964 Error Count: 0x2 00:32:55.964 Submission Queue Id: 0x0 00:32:55.964 Command Id: 0x5 00:32:55.964 Phase Bit: 0 00:32:55.964 Status Code: 0x2 00:32:55.964 Status Code Type: 0x0 00:32:55.964 Do Not Retry: 1 00:32:55.964 Error Location: 0x28 00:32:55.964 LBA: 0x0 00:32:55.964 Namespace: 0x0 00:32:55.964 Vendor Log Page: 0x0 00:32:55.964 ----------- 00:32:55.964 Entry: 2 00:32:55.964 Error Count: 0x1 00:32:55.964 Submission Queue Id: 0x0 00:32:55.964 Command Id: 0x4 00:32:55.964 Phase Bit: 0 00:32:55.964 Status Code: 0x2 00:32:55.964 Status Code Type: 0x0 00:32:55.964 Do Not Retry: 1 00:32:55.964 Error Location: 0x28 00:32:55.964 LBA: 0x0 00:32:55.964 Namespace: 0x0 00:32:55.964 Vendor Log Page: 0x0 00:32:55.964 00:32:55.964 Number of Queues 00:32:55.964 ================ 00:32:55.964 Number of I/O Submission Queues: 128 00:32:55.964 Number of I/O Completion Queues: 128 00:32:55.964 00:32:55.964 ZNS Specific Controller Data 00:32:55.964 ============================ 00:32:55.964 Zone Append Size Limit: 0 00:32:55.964 00:32:55.964 00:32:55.964 Active Namespaces 00:32:55.964 ================= 00:32:55.964 get_feature(0x05) failed 00:32:55.964 Namespace ID:1 00:32:55.964 Command Set Identifier: NVM (00h) 00:32:55.964 Deallocate: Supported 00:32:55.964 Deallocated/Unwritten Error: Not Supported 00:32:55.964 Deallocated Read Value: Unknown 00:32:55.964 Deallocate in Write Zeroes: Not Supported 00:32:55.964 Deallocated Guard Field: 0xFFFF 00:32:55.964 Flush: Supported 00:32:55.964 Reservation: Not Supported 00:32:55.964 Namespace Sharing Capabilities: Multiple Controllers 00:32:55.964 Size (in LBAs): 1953525168 (931GiB) 00:32:55.964 Capacity (in LBAs): 1953525168 (931GiB) 00:32:55.964 Utilization (in LBAs): 1953525168 (931GiB) 00:32:55.964 UUID: 2baa9c61-9544-4f24-afbb-e117a68a38e1 00:32:55.964 Thin Provisioning: Not Supported 00:32:55.964 Per-NS Atomic Units: Yes 00:32:55.964 Atomic Boundary Size (Normal): 0 00:32:55.964 Atomic Boundary Size (PFail): 0 00:32:55.964 Atomic Boundary Offset: 0 00:32:55.964 NGUID/EUI64 Never Reused: No 00:32:55.964 ANA group ID: 1 00:32:55.964 Namespace Write Protected: No 00:32:55.964 Number of LBA Formats: 1 00:32:55.964 Current LBA Format: LBA Format #00 00:32:55.964 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:55.964 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:55.964 rmmod nvme_tcp 00:32:55.964 rmmod nvme_fabrics 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:55.964 07:01:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:57.917 07:01:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:59.288 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:59.288 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:59.288 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci
00:32:59.288 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:59.288 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:33:00.225 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:33:00.225
00:33:00.225 real 0m9.140s
00:33:00.225 user 0m1.885s
00:33:00.225 sys 0m3.206s
00:33:00.225 07:01:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:00.225 07:01:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:33:00.225 ************************************
00:33:00.225 END TEST nvmf_identify_kernel_target
00:33:00.225 ************************************
00:33:00.225 07:01:47 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:33:00.225 07:01:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:33:00.225 07:01:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:33:00.225 07:01:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:00.225 ************************************
00:33:00.225 START TEST nvmf_auth_host
00:33:00.225 ************************************
00:33:00.225 07:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:33:00.225 * Looking for test storage...
00:33:00.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:00.483 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
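The nvmf_identify_kernel_target test that just finished drove the kernel target entirely through nvmet configfs. Below is a condensed sketch of the configure/clean pair traced earlier; the shell trace hides the configfs files being redirected into, so the attribute names used here (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are filled in as an assumption:

modprobe nvmet                                     # exposes /sys/kernel/config/nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"  # any host NQN may connect
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                # export: target starts listening
# Teardown (clean_kernel_target) is the mirror image:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet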
00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:00.484 07:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:02.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:02.387 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:02.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:02.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:02.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.388 07:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:02.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:33:02.647 00:33:02.647 --- 10.0.0.2 ping statistics --- 00:33:02.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.647 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:02.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:33:02.647 00:33:02.647 --- 10.0.0.1 ping statistics --- 00:33:02.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.647 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=780214 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 780214
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 780214 ']'
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:02.647 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=08e32231f2594e5ee9b7e676fab02d23
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kBV
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 08e32231f2594e5ee9b7e676fab02d23 0
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 08e32231f2594e5ee9b7e676fab02d23 0
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=08e32231f2594e5ee9b7e676fab02d23
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kBV
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kBV
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kBV
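Spelled out, gen_dhchap_key null 32 above pulls 16 random bytes as 32 hex characters and wraps them in the DHHC-1 secret format. The python step is collapsed by the trace, so its body below is a sketch of the usual DH-HMAC-CHAP encoding (base64 of the ASCII key plus a little-endian CRC32 trailer), not a verbatim copy of the hidden snippet:

key=$(xxd -p -c0 -l 16 /dev/urandom)      # e.g. 08e32231f2594e5ee9b7e676fab02d23
file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.kBV
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 trailer, little-endian
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())  # 00 = null digest
PY
chmod 0600 "$file"                        # secrets are kept mode 0600

The digest byte tracks the digests map in the trace (null=0 through sha512=3), which is why the sha512 companion key generated next is formatted with digest 3.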
00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=be6e359ed855c0745803b191c9074cecc0a0df9614058cfc8f49c9474baefa6b 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LiR 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key be6e359ed855c0745803b191c9074cecc0a0df9614058cfc8f49c9474baefa6b 3 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 be6e359ed855c0745803b191c9074cecc0a0df9614058cfc8f49c9474baefa6b 3 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=be6e359ed855c0745803b191c9074cecc0a0df9614058cfc8f49c9474baefa6b 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LiR 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LiR 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LiR 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3626c7b389f9d3e182df54d1cbb600b652a665f81bd20ece 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1gp 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3626c7b389f9d3e182df54d1cbb600b652a665f81bd20ece 0 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3626c7b389f9d3e182df54d1cbb600b652a665f81bd20ece 0 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3626c7b389f9d3e182df54d1cbb600b652a665f81bd20ece 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1gp 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1gp 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1gp 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a5678502188e2a14a98d0052f3f8ace0c9ee538c13231b6 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9GY 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a5678502188e2a14a98d0052f3f8ace0c9ee538c13231b6 2 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a5678502188e2a14a98d0052f3f8ace0c9ee538c13231b6 2 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a5678502188e2a14a98d0052f3f8ace0c9ee538c13231b6 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:02.906 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9GY 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9GY 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9GY 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=752019b6444266a9dc2bc34b1a0bbd96 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.abZ 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 752019b6444266a9dc2bc34b1a0bbd96 1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 752019b6444266a9dc2bc34b1a0bbd96 1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=752019b6444266a9dc2bc34b1a0bbd96 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.abZ 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.abZ 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.abZ 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83ba0b09b1ae3b3ae15ae55f3b285658 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HgB 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83ba0b09b1ae3b3ae15ae55f3b285658 1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83ba0b09b1ae3b3ae15ae55f3b285658 1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83ba0b09b1ae3b3ae15ae55f3b285658 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HgB 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HgB 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HgB 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:03.165 07:01:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2f432365215be330c3148f71da0a13e1d6c5292153d51751 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7EA 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2f432365215be330c3148f71da0a13e1d6c5292153d51751 2 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2f432365215be330c3148f71da0a13e1d6c5292153d51751 2 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2f432365215be330c3148f71da0a13e1d6c5292153d51751 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7EA 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7EA 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7EA 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:03.165 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b42877763d5cfec8f05befbcc70ec69d 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8pP 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b42877763d5cfec8f05befbcc70ec69d 0 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b42877763d5cfec8f05befbcc70ec69d 0 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b42877763d5cfec8f05befbcc70ec69d 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:03.166 07:01:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8pP 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8pP 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8pP 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ade32bd4914cb5b6c1b72c993ddfeeab46239859e53c872e36d1e9a71211705d 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KqK 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ade32bd4914cb5b6c1b72c993ddfeeab46239859e53c872e36d1e9a71211705d 3 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ade32bd4914cb5b6c1b72c993ddfeeab46239859e53c872e36d1e9a71211705d 3 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ade32bd4914cb5b6c1b72c993ddfeeab46239859e53c872e36d1e9a71211705d 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KqK 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KqK 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KqK 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 780214 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 780214 ']' 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
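
At this point all nine secrets exist on disk. Each keys[i]/ckeys[i] pair feeds one DH-HMAC-CHAP keyid: the host secret plus an optional controller secret for bidirectional authentication. ckeys[4] is left empty on purpose, so keyid 4 later exercises host-only (unidirectional) auth. For reference, the files from this run:

# Recap of the secret files generated above (paths verbatim from this log):
keys=(/tmp/spdk.key-null.kBV     # keys[0]: digest null,   32 hex chars
      /tmp/spdk.key-null.1gp     # keys[1]: digest null,   48
      /tmp/spdk.key-sha256.abZ   # keys[2]: digest sha256, 32
      /tmp/spdk.key-sha384.7EA   # keys[3]: digest sha384, 48
      /tmp/spdk.key-sha512.KqK)  # keys[4]: digest sha512, 64
ckeys=(/tmp/spdk.key-sha512.LiR  # ckeys[0]: sha512, 64
       /tmp/spdk.key-sha384.9GY  # ckeys[1]: sha384, 48
       /tmp/spdk.key-sha256.HgB  # ckeys[2]: sha256, 32
       /tmp/spdk.key-null.8pP    # ckeys[3]: null,   32
       "")                       # ckeys[4]: empty -> one-way auth for keyid 4
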
00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:03.166 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kBV 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LiR ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LiR 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1gp 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9GY ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9GY 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.abZ 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HgB ]] 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HgB 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.425 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
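
With the target listening, the loop that started above registers each secret file with SPDK's keyring so later attach calls can refer to them by name (key0/ckey0, key1/ckey1, ...). Slots 0 through 2 are done by this point and the remaining iterations below repeat the pattern; condensed, the loop amounts to the following, assuming rpc_cmd is the suite's wrapper around scripts/rpc.py against /var/tmp/spdk.sock:

# Condensed form of the keyring registration loop running above:
for i in "${!keys[@]}"; do
  scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
  # Controller keys are optional; keyid 4 has none.
  [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
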
00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7EA 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8pP ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8pP 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KqK 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
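
nvmet_auth_init now builds the kernel-side counterpart: configure_kernel_target loads nvmet, claims an unused local NVMe namespace (the "No valid GPT data, bailing" probe below confirms it holds nothing), and wires it up through configfs as an NVMe/TCP target on 10.0.0.1:4420. Bash xtrace does not print redirections, so the bare `echo` records below lack their destinations; mapped onto the kernel nvmet configfs ABI they plausibly are:

# Likely expansion of the configfs records that follow (attribute names
# assumed from the kernel nvmet ABI; values taken from this log):
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"   # start listening

The `nvme discover` that follows then shows two discovery-log records, confirming the kernel target exports nqn.2024-02.io.spdk:cnode0 on that address.
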
00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:03.684 07:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:04.618 Waiting for block devices as requested 00:33:04.618 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:04.877 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:04.877 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:04.877 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.135 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.135 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.135 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.135 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:05.395 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:05.395 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:05.395 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:05.395 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.654 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.654 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.654 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.654 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:05.911 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:06.169 07:01:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:06.427 No valid GPT data, bailing 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:06.427 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:06.427 00:33:06.427 Discovery Log Number of Records 2, Generation counter 2 00:33:06.427 =====Discovery Log Entry 0====== 00:33:06.427 trtype: tcp 00:33:06.427 adrfam: ipv4 00:33:06.427 subtype: current discovery subsystem 00:33:06.427 treq: not specified, sq flow control disable supported 00:33:06.427 portid: 1 00:33:06.427 trsvcid: 4420 00:33:06.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:06.428 traddr: 10.0.0.1 00:33:06.428 eflags: none 00:33:06.428 sectype: none 00:33:06.428 =====Discovery Log Entry 1====== 00:33:06.428 trtype: tcp 00:33:06.428 adrfam: ipv4 00:33:06.428 subtype: nvme subsystem 00:33:06.428 treq: not specified, sq flow control disable supported 00:33:06.428 portid: 1 00:33:06.428 trsvcid: 4420 00:33:06.428 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:06.428 traddr: 10.0.0.1 00:33:06.428 eflags: none 00:33:06.428 sectype: none 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 
]] 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.428 07:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 nvme0n1 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.686 
07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.686 
07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 nvme0n1 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.686 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.944 07:01:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.944 nvme0n1 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
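
From here the test sweeps digest x dhgroup x keyid. Every round has the same shape: write the expected secrets into the kernel host entry, pin the SPDK initiator to a single digest/dhgroup pair, attach, verify the controller appears, detach. Reconstructed for the keyid 1 round just logged, with the host-node attribute names assumed from the kernel nvmet configfs ABI and rpc.py as before:

# One round of the sweep, reconstructed (sha256, ffdhe2048, keyid 1):
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"              # what the target will demand
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MzYyNmM3...' > "$host/dhchap_key"      # key1 (full value above)
echo 'DHHC-1:02:OGE1Njc4...' > "$host/dhchap_ctrl_key" # ckey1, bidirectional round

# Initiator side: restrict negotiation, attach, verify, tear down.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0              # next combination

Keyid 4 rounds (further below) skip --dhchap-ctrlr-key entirely, since ckeys[4] was left empty.
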
00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.944 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.203 nvme0n1 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.203 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:07.204 07:01:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.204 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.463 nvme0n1 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.463 07:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.463 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.724 nvme0n1 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.724 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.983 nvme0n1 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.983 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.984 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.984 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.984 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.243 nvme0n1 00:33:08.243 
07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.243 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.503 nvme0n1 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.503 07:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
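Every (digest, dhgroup, keyid) tuple in this trace exercises the same host-side sequence: restrict DH-HMAC-CHAP negotiation to the digest and DH group under test, attach with that tuple's key, verify, detach. A condensed sketch of that sequence, reconstructed from the traced commands above (rpc_cmd is the suite's wrapper around SPDK's rpc.py, and key2/ckey2 refer to keys the test registered earlier, not shown in this excerpt):

  # Host half of one connect_authenticate iteration, condensed from the
  # trace; the values shown are this run's sha256/ffdhe3072/keyid=2 tuple.
  digest=sha256 dhgroup=ffdhe3072 keyid=2

  # Allow only the digest and DH group under test during negotiation.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach over TCP on 10.0.0.1:4420, authenticating with keyN (and with
  # ckeyN as well when a controller key exists for this keyid).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
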
00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.503 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.764 nvme0n1 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.764 
07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.764 07:01:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.764 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.025 nvme0n1 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:09.025 07:01:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.025 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.286 nvme0n1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.286 07:01:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.286 07:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.545 nvme0n1 00:33:09.545 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.545 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.545 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.545 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.545 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.805 07:01:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.805 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.806 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.066 nvme0n1 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
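The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion recurring at auth.sh@58 is what decides between one-way and mutual authentication: keyid 4 carries no controller key in this run (the [[ -z '' ]] branch at auth.sh@51), so its attach omits --dhchap-ctrlr-key entirely, while keyids 0 through 3 also verify the controller. A minimal, self-contained illustration of that guard (the placeholder secret is hypothetical; a real run uses the DHHC-1 secrets visible above):

  # ${var:+word} expands to nothing when the variable is empty or unset,
  # so an empty ckey yields zero extra attach arguments (one-way auth).
  declare -a ckeys
  ckeys[0]='DHHC-1:00:placeholder'  # hypothetical stand-in for a real secret
  ckeys[4]=                         # keyid 4 carries no controller key
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done

With keyid 0 this prints two extra arguments (--dhchap-ctrlr-key ckey0); with keyid 4 it prints none, matching the key4-only attach commands in the trace.
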
00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.066 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.325 nvme0n1 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.325 07:01:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.325 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.584 07:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.842 nvme0n1 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:10.842 07:01:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.842 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.446 nvme0n1 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.446 
07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.446 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.447 07:01:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.447 07:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.015 nvme0n1 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.015 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.016 07:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.584 nvme0n1 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.584 
07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.584 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.149 nvme0n1 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.149 07:02:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.150 07:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.715 nvme0n1 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.715 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.975 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.976 07:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.914 nvme0n1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.914 07:02:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.914 07:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.851 nvme0n1 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.851 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.852 07:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.788 nvme0n1 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.788 
07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
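The trace keeps repeating one connect/verify/detach cycle per key and dhgroup. A minimal sketch of that cycle, assuming the rpc_cmd wrapper around SPDK's rpc.py that this trace uses throughout; the function name below is hypothetical, and only the rpc_cmd invocations are taken from the log:

connect_cycle() {
    local keyid=$1
    # Restrict the initiator to one digest/dhgroup combination per round.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Attach with the host key; a controller key is passed only when the key
    # table defines one for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # The attach only completes if DH-HMAC-CHAP succeeded, so nvme0 must now
    # appear in the controller list; detach to make room for the next round.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
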
00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.788 07:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.722 nvme0n1 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.722 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.979 
07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.979 07:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.914 nvme0n1 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.914 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.915 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.175 nvme0n1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
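Before each cycle, nvmet_auth_set_key pushes the matching key material to the kernel nvmet target; the echoes at host/auth.sh@48-51 above emit the digest, dhgroup, and DHHC-1 secrets. A hedged reconstruction of that helper, assuming the standard Linux nvmet configfs attributes — the host directory path and the keys/ckeys array names are assumptions, since the trace does not show the redirection targets:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe2048
    echo "${keys[keyid]}"  > "${host}/dhchap_key"      # DHHC-1:NN:...: host secret
    # Bidirectional auth also needs a controller key; keyid 4 above has none,
    # which is why its ckey test expands to [[ -z '' ]] in the trace.
    [[ -n "${ckeys[keyid]:-}" ]] && echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
}

The DHHC-1:NN: prefix on each secret encodes how the base64 payload was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512, per the NVMe DH-HMAC-CHAP key format).
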
00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.175 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 nvme0n1 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.433 07:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 nvme0n1 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.433 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.691 nvme0n1 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.691 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.692 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:19.692 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.692 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:19.692 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.692 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.951 nvme0n1 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:19.951 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
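Each connect_authenticate pass traced here reduces to four SPDK RPCs against the host-side bdev layer; everything else is the harness checking return codes and resolving the initiator IP. As a minimal standalone sketch of one such pass — assuming rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, and that the target at 10.0.0.1:4420 already holds the matching DH-HMAC-CHAP secrets under the names key0/ckey0 (those names are the test's own identifiers, not fixed SPDK conventions):

    # restrict the host to the digest/dhgroup pair under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach with bidirectional authentication (host key plus controller key)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm the controller came up, then tear it down for the next iteration
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The recurring [[ nvme0 == \n\v\m\e\0 ]] comparisons in the trace are that same check: a successful attach is the pass/fail signal for each digest/dhgroup/keyid combination.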
00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.952 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.212 nvme0n1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
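One detail worth calling out in the trace: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 is what turns keyid 4 into a unidirectional-auth case. ckeys[4] is empty in this run (note the '# ckey=' and '[[ -z '' ]]' lines around each keyid=4 block), so the array expands to zero words and the attach RPC for key4 is issued without --dhchap-ctrlr-key. A self-contained illustration of the idiom, using placeholder values rather than the test's secrets:

    #!/usr/bin/env bash
    ckeys=("c0" "c1" "c2" "c3" "")        # slot 4 deliberately empty
    for keyid in "${!ckeys[@]}"; do
        # :+ yields the flag pair only when ckeys[keyid] is non-empty;
        # an empty slot produces a zero-element array, not an empty argument
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done

Run as-is, this prints the --dhchap-ctrlr-key pair for keyids 0-3 and '<none>' for keyid 4, matching the bdev_nvme_attach_controller invocations recorded throughout this log.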
00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.212 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.472 nvme0n1 00:33:20.472 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.472 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.472 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.472 07:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.472 07:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:20.472 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.473 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.732 nvme0n1 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.732 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.991 nvme0n1 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.991 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.252 nvme0n1 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.252 07:02:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.252 07:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.821 nvme0n1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.821 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.080 nvme0n1 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.080 07:02:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.080 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.339 nvme0n1 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:22.339 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:22.340 07:02:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.340 07:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.598 nvme0n1 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.598 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:22.858 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.118 nvme0n1 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.118 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.119 07:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.685 nvme0n1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.685 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.251 nvme0n1 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.251 07:02:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.251 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.509 07:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.077 nvme0n1 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.077 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.078 07:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.645 nvme0n1 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
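The block that keeps repeating at host/auth.sh@42-51 in the trace above is the target-side half of each iteration: before the host attempts to authenticate, the test programs one DHHC-1 secret (plus a bidirectional controller secret for keyids 0-3; keyid 4 has none, hence the "[[ -z '' ]]" at @51) into the kernel nvmet entry for the host NQN. A minimal sketch of what those traced echo commands appear to be doing; set -x does not print redirections, so the configfs paths below are an assumption (the standard kernel nvmet host attributes), not something shown in the log, and the keys/ckeys arrays are populated earlier in the test:

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3 key ckey
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # assumed path, not in the trace

	key=${keys[keyid]} ckey=${ckeys[keyid]}                  # the DHHC-1:xx:...: secrets (@45-46)
	echo "hmac($digest)" > "$host/dhchap_hash"               # @48: pin the HMAC digest
	echo "$dhgroup" > "$host/dhchap_dhgroup"                 # @49: pin the FFDHE group
	echo "$key" > "$host/dhchap_key"                         # @50: host secret
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key" # @51: controller secret, if any
}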
00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.645 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.248 nvme0n1 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
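connect_authenticate (host/auth.sh@55-65), which the trace enters here for sha384/ffdhe8192/keyid 0, is the host-side half of the cycle: restrict the SPDK initiator to the one digest and DH group under test, attach over TCP with the matching key, confirm the controller exists, and detach again. A hedged sketch reconstructed from the traced commands; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, the key names key0..key4/ckey0..ckey3 refer to secrets presumably registered in SPDK's keyring earlier in the run (that part precedes this excerpt), and the bare nvme0n1 lines in the log are the attach RPC's stdout, i.e. the bdev it created:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	# @58: pass --dhchap-ctrlr-key only when this keyid has a controller key
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# @60: allow exactly one digest/dhgroup pair on the initiator side
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# @61: 10.0.0.1 is NVMF_INITIATOR_IP, picked by get_main_ns_ip
	# (nvmf/common.sh@741-755) because the transport here is tcp
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
	# @64: authentication succeeded iff the controller actually came up
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	# @65: tear down before the next digest/dhgroup/keyid combination
	rpc_cmd bdev_nvme_detach_controller nvme0
}

Sweeping the grid one tuple at a time keeps every success check attributable to a single (digest, dhgroup, keyid) combination, which is why the log repeats the identical set-key/attach/verify/detach cadence for each of them.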
00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.248 07:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.191 nvme0n1 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.191 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.192 07:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.129 nvme0n1 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.129 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.130 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.130 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.130 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.130 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.387 07:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.323 nvme0n1 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.323 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.324 07:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.265 nvme0n1 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.265 07:02:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.265 07:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.206 nvme0n1 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.206 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.466 nvme0n1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.466 07:02:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.466 07:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 nvme0n1 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.988 nvme0n1 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.988 07:02:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.988 07:02:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.988 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 nvme0n1 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 nvme0n1 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.248 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.507 07:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.507 nvme0n1 00:33:32.507 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.507 
07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.507 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.507 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.507 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.507 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.766 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.766 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.766 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.766 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.767 07:02:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.767 nvme0n1 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.767 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.025 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.025 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.025 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.025 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.025 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
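
[annotation] Each nvmet_auth_set_key call traced above (the 'hmac(sha512)', dhgroup, and DHHC-1 key echoes at auth.sh@48-51) provisions the in-kernel nvmet target with the DH-HMAC-CHAP parameters for one key slot. A minimal sketch of what such a helper plausibly does, assuming the standard Linux nvmet configfs attributes (kernel 5.18+); the configfs host path below is illustrative, not taken from this log:

    # Hypothetical reconstruction of the target-side helper traced above.
    # Assumes /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_* attributes;
    # keys[] and ckeys[] are the arrays iterated by the surrounding loop.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "${key}"          > "${host}/dhchap_key"      # host secret (DHHC-1:..)
        # Controller key is optional; set it only for bidirectional auth,
        # matching the [[ -z $ckey ]] guard visible at auth.sh@51.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
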
00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.026 nvme0n1 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.026 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.284 07:02:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
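
[annotation] The get_main_ns_ip block traced repeatedly above (nvmf/common.sh@741-755) resolves which address the initiator should dial for the active transport. Reassembled from the xtrace, it reduces to roughly the following; the variable names are the ones visible in the trace, and the indirect expansion step is inferred from NVMF_INITIATOR_IP resolving to 10.0.0.1:

    # Reconstructed from the nvmf/common.sh xtrace (lines @741-@755); a
    # paraphrase of the control flow, not a verbatim copy of the source.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP  # RDMA runs dial the target-side IP
            [tcp]=NVMF_INITIATOR_IP      # TCP runs dial the initiator-side IP
        )
        # Bail out unless both the transport and its candidate name are set.
        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[${TEST_TRANSPORT}]} ]] && return 1
        ip=${ip_candidates[${TEST_TRANSPORT}]}  # here: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # indirect: value of $NVMF_INITIATOR_IP
        echo "${!ip}"                           # -> 10.0.0.1 in this run
    }
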
00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.284 nvme0n1 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.284 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.543 
07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.543 07:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.543 nvme0n1 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.543 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.803 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.063 nvme0n1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.063 07:02:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.063 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.323 nvme0n1 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:34.323 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
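
[annotation] On the host side, each connect_authenticate iteration above boils down to two SPDK RPCs: restrict the allowed digest/DH group, then attach with the matching key slot. Stripped of the test harness, one sha512/ffdhe4096 pass looks roughly like this; the rpc.py path and the keyring names key2/ckey2 are assumptions (the keys are registered earlier in auth.sh, outside this excerpt):

    # Host-side sketch of one connect_authenticate pass (sha512/ffdhe4096, keyid=2).
    # scripts/rpc.py location and the pre-registered key names are assumptions;
    # the flags themselves are taken verbatim from the trace above.
    rpc=scripts/rpc.py

    # 1. Allow exactly one digest and one DH group for the DH-HMAC-CHAP negotiation.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # 2. Connect, presenting the host key (and controller key for bidirectional auth).
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Verify the controller came up authenticated, then tear it down.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0
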
00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.324 07:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.893 nvme0n1 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.893 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.154 nvme0n1 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.154 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.415 nvme0n1 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
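The get_main_ns_ip trace running through this point is the initiator-address lookup performed before every attach: an associative array maps each transport to the name of the environment variable that carries the reachable address, and bash indirection resolves it. A minimal sketch of that helper, reconstructed from the expanded entries above and below (the TEST_TRANSPORT variable name is an assumption; the trace only ever shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The two nvmf/common.sh@747 entries in the trace, [[ -z tcp ]] and
        # [[ -z NVMF_INITIATOR_IP ]], are these guards after expansion.
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi

        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ${!ip} dereferences the variable whose name is stored in ip; for
        # a TCP run that is NVMF_INITIATOR_IP, which expands to 10.0.0.1.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

The echoed 10.0.0.1 becomes the -a argument of rpc_cmd bdev_nvme_attach_controller, so the same helper serves RDMA runs unchanged by selecting NVMF_FIRST_TARGET_IP instead.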
00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.415 07:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.990 nvme0n1 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
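Each pass of the keyid loop pairs the target-side nvmet_auth_set_key traced above with the initiator-side connect_authenticate whose body follows. The trace only shows the target helper echoing 'hmac(sha512)', the dhgroup, and the DHHC-1 secrets; a hedged sketch of where those writes most plausibly land, assuming the Linux nvmet configfs DH-HMAC-CHAP attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host entry for nqn.2024-02.io.spdk:host0, none of which is printed by the trace itself:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        # Assumed path; the trace never prints the configfs destination.
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"   # e.g. hmac(sha512)
        echo "${dhgroup}" > "${hostdir}/dhchap_dhgroup"     # e.g. ffdhe6144
        echo "${key}" > "${hostdir}/dhchap_key"             # DHHC-1:xx:... secret
        # keyid 4 has no controller key (ckey expands empty in the trace),
        # so bidirectional authentication is only exercised for keyids 0-3.
        [[ -z $ckey ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
    }

connect_authenticate then mirrors the same choice on the initiator: rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 pins the negotiation, the bdev_nvme_attach_controller call with --dhchap-key key1 --dhchap-ctrlr-key ckey1 must succeed, bdev_nvme_get_controllers piped through jq -r '.[].name' must report nvme0, and the controller is detached before the next digest/dhgroup/key combination. The NOT-wrapped attach attempts near the end of this run invert the check: connecting without the matching key must fail, which is the code -5 Input/output error JSON-RPC response dumped there.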
00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.990 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.249 07:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.815 nvme0n1 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.815 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.381 nvme0n1 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.381 07:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.949 nvme0n1 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.949 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.950 07:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.519 nvme0n1 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.519 07:02:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhlMzIyMzFmMjU5NGU1ZWU5YjdlNjc2ZmFiMDJkMjObUJZA: 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmU2ZTM1OWVkODU1YzA3NDU4MDNiMTkxYzkwNzRjZWNjMGEwZGY5NjE0MDU4Y2ZjOGY0OWM5NDc0YmFlZmE2Yr5PPRY=: 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.519 07:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.894 nvme0n1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.894 07:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.829 nvme0n1 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.829 07:02:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzUyMDE5YjY0NDQyNjZhOWRjMmJjMzRiMWEwYmJkOTbwScGP: 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNiYTBiMDliMWFlM2IzYWUxNWFlNTVmM2IyODU2NTgYFRPq: 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.829 07:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.812 nvme0n1 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmY0MzIzNjUyMTViZTMzMGMzMTQ4ZjcxZGEwYTEzZTFkNmM1MjkyMTUzZDUxNzUxcwpUIA==: 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyODc3NzYzZDVjZmVjOGYwNWJlZmJjYzcwZWM2OWQIoxw4: 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:41.812 07:02:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.812 07:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.748 nvme0n1 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:42.748 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRlMzJiZDQ5MTRjYjViNmMxYjcyYzk5M2RkZmVlYWI0NjIzOTg1OWU1M2M4NzJlMzZkMWU5YTcxMjExNzA1ZCoLFrw=: 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:42.749 07:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.143 nvme0n1 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzYyNmM3YjM4OWY5ZDNlMTgyZGY1NGQxY2JiNjAwYjY1MmE2NjVmODFiZDIwZWNlMwoVCw==: 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGE1Njc4NTAyMTg4ZTJhMTRhOThkMDA1MmYzZjhhY2UwYzllZTUzOGMxMzIzMWI2bphp/Q==: 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.143 
07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.143 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.143 request: 00:33:44.143 { 00:33:44.143 "name": "nvme0", 00:33:44.143 "trtype": "tcp", 00:33:44.143 "traddr": "10.0.0.1", 00:33:44.143 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:44.143 "adrfam": "ipv4", 00:33:44.143 "trsvcid": "4420", 00:33:44.143 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:44.143 "method": "bdev_nvme_attach_controller", 00:33:44.143 "req_id": 1 00:33:44.144 } 00:33:44.144 Got JSON-RPC error response 00:33:44.144 response: 00:33:44.144 { 00:33:44.144 "code": -5, 00:33:44.144 "message": "Input/output error" 00:33:44.144 } 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:44.144 
07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.144 request: 00:33:44.144 { 00:33:44.144 "name": "nvme0", 00:33:44.144 "trtype": "tcp", 00:33:44.144 "traddr": "10.0.0.1", 00:33:44.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:44.144 "adrfam": "ipv4", 00:33:44.144 "trsvcid": "4420", 00:33:44.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:44.144 "dhchap_key": "key2", 00:33:44.144 "method": "bdev_nvme_attach_controller", 00:33:44.144 "req_id": 1 00:33:44.144 } 00:33:44.144 Got JSON-RPC error response 00:33:44.144 response: 00:33:44.144 { 00:33:44.144 "code": -5, 00:33:44.144 "message": "Input/output error" 00:33:44.144 } 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:44.144 
07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.144 request: 00:33:44.144 { 00:33:44.144 "name": "nvme0", 00:33:44.144 "trtype": "tcp", 00:33:44.144 "traddr": "10.0.0.1", 00:33:44.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:44.144 "adrfam": "ipv4", 00:33:44.144 "trsvcid": "4420", 00:33:44.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:44.144 "dhchap_key": "key1", 00:33:44.144 "dhchap_ctrlr_key": "ckey2", 00:33:44.144 "method": "bdev_nvme_attach_controller", 00:33:44.144 "req_id": 1 
00:33:44.144 } 00:33:44.144 Got JSON-RPC error response 00:33:44.144 response: 00:33:44.144 { 00:33:44.144 "code": -5, 00:33:44.144 "message": "Input/output error" 00:33:44.144 } 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:44.144 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:44.145 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:44.145 rmmod nvme_tcp 00:33:44.145 rmmod nvme_fabrics 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 780214 ']' 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 780214 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 780214 ']' 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 780214 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 780214 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 780214' 00:33:44.403 killing process with pid 780214 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 780214 00:33:44.403 07:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 780214 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:44.403 07:02:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.403 07:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:46.944 07:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:47.879 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:47.879 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:47.879 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:47.879 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:47.879 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:47.879 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:47.879 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:48.138 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:48.138 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:48.138 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:49.076 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:49.076 07:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kBV /tmp/spdk.key-null.1gp /tmp/spdk.key-sha256.abZ /tmp/spdk.key-sha384.7EA /tmp/spdk.key-sha512.KqK /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:49.076 07:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:50.014 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:50.014 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:50.014 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:50.014 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:50.014 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:50.014 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:50.014 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:50.014 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:50.014 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:50.014 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:50.014 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:50.014 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:50.014 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:50.014 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:50.014 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:50.014 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:50.014 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:50.273 00:33:50.273 real 0m49.997s 00:33:50.273 user 0m47.679s 00:33:50.273 sys 0m5.706s 00:33:50.273 07:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:50.273 07:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.273 ************************************ 00:33:50.273 END TEST nvmf_auth_host 00:33:50.273 ************************************ 00:33:50.273 07:02:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:50.273 07:02:37 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:50.273 07:02:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:50.273 07:02:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:50.273 07:02:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.273 ************************************ 00:33:50.273 START TEST nvmf_digest 00:33:50.273 ************************************ 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:50.273 * Looking for test storage... 
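For reference, the nvmf_auth_host cleanup traced above unwinds the kernel nvmet target through configfs. A minimal sketch of that teardown, using the subsystem, port, and host NQNs from this run — note that xtrace does not capture redirect targets, so the namespace-disable write shown for the bare `echo 0` is an assumption:

    cfs=/sys/kernel/config/nvmet
    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    rm "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"      # unlink the host ACL entry
    rmdir "$cfs/hosts/$hostnqn"                              # drop the host definition
    echo 0 > "$cfs/subsystems/$subnqn/namespaces/1/enable"   # assumed target of the bare 'echo 0' in the trace
    rm -f "$cfs/ports/1/subsystems/$subnqn"                  # detach the subsystem from port 1
    rmdir "$cfs/subsystems/$subnqn/namespaces/1"             # remove namespace 1
    rmdir "$cfs/ports/1"                                     # remove the listener port
    rmdir "$cfs/subsystems/$subnqn"                          # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules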
00:33:50.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.273 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.531 07:02:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.531 07:02:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:52.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:52.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:52.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.434 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:52.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:33:52.435 00:33:52.435 --- 10.0.0.2 ping statistics --- 00:33:52.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.435 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:52.435 00:33:52.435 --- 10.0.0.1 ping statistics --- 00:33:52.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.435 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.435 07:02:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.435 ************************************ 00:33:52.435 START TEST nvmf_digest_clean 00:33:52.435 ************************************ 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=789657 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 789657 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 789657 ']' 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.435 
07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:52.435 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.693 [2024-07-15 07:02:40.073163] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:52.693 [2024-07-15 07:02:40.073247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.693 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.693 [2024-07-15 07:02:40.136756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.693 [2024-07-15 07:02:40.219423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.693 [2024-07-15 07:02:40.219479] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.693 [2024-07-15 07:02:40.219493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.693 [2024-07-15 07:02:40.219504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.693 [2024-07-15 07:02:40.219513] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
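The target the digest tests talk to runs inside the cvl_0_0_ns_spdk namespace assembled by nvmf_tcp_init above: one E810 port (cvl_0_0) is moved into the namespace and addressed as 10.0.0.2 (target side), while its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side). Condensed verbatim from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420) on the initiator port
    ping -c 1 10.0.0.2                                             # root ns -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator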
00:33:52.693 [2024-07-15 07:02:40.219540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.693 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.951 null0 00:33:52.951 [2024-07-15 07:02:40.407453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.951 [2024-07-15 07:02:40.431674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=789677 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 789677 /var/tmp/bperf.sock 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 789677 ']' 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:52.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:52.951 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.951 [2024-07-15 07:02:40.480459] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:52.951 [2024-07-15 07:02:40.480530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789677 ] 00:33:52.951 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.951 [2024-07-15 07:02:40.548281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.208 [2024-07-15 07:02:40.642965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.208 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.208 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:53.208 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:53.208 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:53.208 07:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:53.465 07:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.465 07:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.032 nvme0n1 00:33:54.032 07:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:54.032 07:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.032 Running I/O for 2 seconds... 
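While the 2-second pass runs, it is worth noting how it will be judged: after each timed run, run_bperf checks that the crc32c digests were actually computed by the expected accel module. A sketch condensed from the get_accel_stats helper traced just below — the trace shows the individual commands, so the process-substitution plumbing here is an assumption:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))          # crc32c must have run at least once
    [[ $acc_module == software ]]   # expected module is software, since DSA scanning is off in this run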
00:33:55.938 00:33:55.938 Latency(us) 00:33:55.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:55.938 nvme0n1 : 2.01 19181.25 74.93 0.00 0.00 6664.69 3301.07 15728.64 00:33:55.938 =================================================================================================================== 00:33:55.938 Total : 19181.25 74.93 0.00 0.00 6664.69 3301.07 15728.64 00:33:55.938 0 00:33:56.198 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:56.198 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:56.198 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:56.198 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:56.198 | select(.opcode=="crc32c") 00:33:56.198 | "\(.module_name) \(.executed)"' 00:33:56.198 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 789677 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 789677 ']' 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 789677 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 789677 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 789677' 00:33:56.457 killing process with pid 789677 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 789677 00:33:56.457 Received shutdown signal, test time was about 2.000000 seconds 00:33:56.457 00:33:56.457 Latency(us) 00:33:56.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.457 =================================================================================================================== 00:33:56.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.457 07:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 789677 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:56.717 07:02:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=790083 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 790083 /var/tmp/bperf.sock 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 790083 ']' 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:56.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:56.717 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:56.717 [2024-07-15 07:02:44.134121] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:56.717 [2024-07-15 07:02:44.134198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790083 ] 00:33:56.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:56.717 Zero copy mechanism will not be used. 
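Each run_bperf case drives bdevperf the same way over its private RPC socket; condensed from the trace for this randread/131072/qd16 case (the backgrounding of bdevperf is implied by the harness rather than visible in the xtrace):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &       # start paused, waiting for RPC
    "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" framework_start_init    # finish framework init
    "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
                                                                   # --ddgst enables TCP data digests, exercising crc32c
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests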
00:33:56.717 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.717 [2024-07-15 07:02:44.195778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.717 [2024-07-15 07:02:44.287437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.977 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:56.977 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:56.977 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:56.977 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:56.977 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:57.236 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.236 07:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.803 nvme0n1 00:33:57.804 07:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:57.804 07:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:57.804 Zero copy mechanism will not be used. 00:33:57.804 Running I/O for 2 seconds... 
00:33:59.743 00:33:59.743 Latency(us) 00:33:59.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:59.743 nvme0n1 : 2.00 3165.28 395.66 0.00 0.00 5050.07 4271.98 10777.03 00:33:59.743 =================================================================================================================== 00:33:59.743 Total : 3165.28 395.66 0.00 0.00 5050.07 4271.98 10777.03 00:33:59.743 0 00:33:59.743 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:59.743 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:59.743 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:59.743 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:59.743 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:59.743 | select(.opcode=="crc32c") 00:33:59.743 | "\(.module_name) \(.executed)"' 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 790083 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 790083 ']' 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 790083 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 790083 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 790083' 00:34:00.001 killing process with pid 790083 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 790083 00:34:00.001 Received shutdown signal, test time was about 2.000000 seconds 00:34:00.001 00:34:00.001 Latency(us) 00:34:00.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.001 =================================================================================================================== 00:34:00.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.001 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 790083 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:00.260 07:02:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=790508 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 790508 /var/tmp/bperf.sock 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 790508 ']' 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:00.260 07:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:00.260 [2024-07-15 07:02:47.829926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:34:00.260 [2024-07-15 07:02:47.830004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790508 ] 00:34:00.260 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.517 [2024-07-15 07:02:47.893849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.517 [2024-07-15 07:02:47.987652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.517 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:00.517 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:00.517 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:00.517 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:00.517 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:00.774 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:00.775 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:01.343 nvme0n1 00:34:01.343 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:01.343 07:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:01.343 Running I/O for 2 seconds... 
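Each timed run in this test follows the same driver pattern: bdevperf is launched idle with --wait-for-rpc, configured entirely over its Unix-domain RPC socket, and only then told to execute the workload. A condensed sketch of that sequence, assembled from the RPC calls visible in the trace above (the waitforlisten polling and the stats collection are omitted, and ROOT is simply shorthand for the checkout path used in this run):

  ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # start bdevperf paused so the accel layer can still be configured
  $ROOT/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the TCP controller with data digest enabled
  $ROOT/scripts/rpc.py -s $SOCK framework_start_init
  $ROOT/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the 2-second timed workload against the resulting nvme0n1 bdev
  $ROOT/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests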
00:34:03.249
00:34:03.249 Latency(us)
00:34:03.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:03.249 nvme0n1 : 2.00 19433.64 75.91 0.00 0.00 6579.37 4344.79 17864.63
00:34:03.249 ===================================================================================================================
00:34:03.249 Total : 19433.64 75.91 0.00 0.00 6579.37 4344.79 17864.63
00:34:03.249 0
00:34:03.249 07:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:03.249 07:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:03.249 07:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:03.249 07:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:03.249 | select(.opcode=="crc32c")
00:34:03.249 | "\(.module_name) \(.executed)"'
00:34:03.249 07:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 790508
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 790508 ']'
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 790508
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 790508
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 790508'
killing process with pid 790508
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 790508
00:34:03.815 Received shutdown signal, test time was about 2.000000 seconds
00:34:03.815
00:34:03.815 Latency(us)
00:34:03.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.815 ===================================================================================================================
00:34:03.815 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 790508
00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:34:03.815 07:02:51
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=791020 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 791020 /var/tmp/bperf.sock 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 791020 ']' 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:03.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:03.815 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:03.815 [2024-07-15 07:02:51.422474] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:03.815 [2024-07-15 07:02:51.422550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791020 ] 00:34:03.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:03.815 Zero copy mechanism will not be used. 
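The zero-copy notice above is expected rather than a failure: this run uses 128 KiB IOs, which exceed the 65536-byte zero-copy threshold, so the TCP socket layer falls back to copied sends. Note also that the MiB/s column in the bdevperf tables is just IOPS scaled by the IO size, which makes the results easy to sanity-check against the two completed runs above:

  # 4 KiB randwrite run: 19433.64 IOPS at 4096 B per IO
  awk 'BEGIN { printf "%.2f MiB/s\n", 19433.64 * 4096 / (1024 * 1024) }'    # prints 75.91
  # 128 KiB randread run: 3165.28 IOPS at 131072 B per IO
  awk 'BEGIN { printf "%.2f MiB/s\n", 3165.28 * 131072 / (1024 * 1024) }'   # prints 395.66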
00:34:04.073 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.073 [2024-07-15 07:02:51.480702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.073 [2024-07-15 07:02:51.563615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.073 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:04.073 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:04.073 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:04.073 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:04.073 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:04.639 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:04.639 07:02:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:04.898 nvme0n1 00:34:04.898 07:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:04.898 07:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:04.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:04.898 Zero copy mechanism will not be used. 00:34:04.898 Running I/O for 2 seconds... 
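Once the timed run finishes, the harness does not trust the I/O counters alone: it reads the accel framework statistics back from bdevperf and asserts both that crc32c work was actually executed and that it ran in the expected module ("software" here, since scan_dsa=false). A sketch of that check, reusing the exact jq filter shown in the trace (ROOT as in the earlier sketch):

  $ROOT/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          (( acc_executed > 0 ))            # the digest work actually happened
          [[ $acc_module == software ]]     # ...and in the expected module
        }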
00:34:07.432
00:34:07.432 Latency(us)
00:34:07.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.432 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:07.432 nvme0n1 : 2.00 3429.16 428.64 0.00 0.00 4654.76 3519.53 13398.47
00:34:07.432 ===================================================================================================================
00:34:07.432 Total : 3429.16 428.64 0.00 0.00 4654.76 3519.53 13398.47
00:34:07.432 0
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:07.432 | select(.opcode=="crc32c")
00:34:07.432 | "\(.module_name) \(.executed)"'
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 791020
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 791020 ']'
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 791020
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 791020
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 791020'
killing process with pid 791020
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 791020
00:34:07.432 Received shutdown signal, test time was about 2.000000 seconds
00:34:07.432
00:34:07.432 Latency(us)
00:34:07.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.432 ===================================================================================================================
00:34:07.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 791020
00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 789657
00:34:07.432 07:02:54
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 789657 ']' 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 789657 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 789657 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 789657' 00:34:07.432 killing process with pid 789657 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 789657 00:34:07.432 07:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 789657 00:34:07.691 00:34:07.691 real 0m15.206s 00:34:07.691 user 0m30.197s 00:34:07.691 sys 0m4.043s 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:07.691 ************************************ 00:34:07.691 END TEST nvmf_digest_clean 00:34:07.691 ************************************ 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.691 ************************************ 00:34:07.691 START TEST nvmf_digest_error 00:34:07.691 ************************************ 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=791451 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 791451 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 791451 ']' 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:07.691 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.949 [2024-07-15 07:02:55.337318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:07.949 [2024-07-15 07:02:55.337404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.949 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.949 [2024-07-15 07:02:55.405814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.949 [2024-07-15 07:02:55.492066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.949 [2024-07-15 07:02:55.492129] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.949 [2024-07-15 07:02:55.492147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.949 [2024-07-15 07:02:55.492160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.949 [2024-07-15 07:02:55.492172] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.949 [2024-07-15 07:02:55.492209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.949 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.949 [2024-07-15 07:02:55.560786] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.207 07:02:55 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.207 null0 00:34:08.207 [2024-07-15 07:02:55.672406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.207 [2024-07-15 07:02:55.696644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=791482 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 791482 /var/tmp/bperf.sock 00:34:08.207 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 791482 ']' 00:34:08.208 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:08.208 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:08.208 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:08.208 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:08.208 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.208 [2024-07-15 07:02:55.742381] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:34:08.208 [2024-07-15 07:02:55.742467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791482 ] 00:34:08.208 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.208 [2024-07-15 07:02:55.804739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.465 [2024-07-15 07:02:55.892773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.465 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:08.465 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:08.465 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:08.465 07:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:08.723 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:08.981 nvme0n1 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:08.981 07:02:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:09.240 Running I/O for 2 seconds... 
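The nvmf_digest_error phase being set up here inverts the previous test: instead of confirming that digests pass, it makes the target produce bad ones. crc32c is reassigned to the accel "error" module on the target and 256 corrupt completions are queued, while the initiator keeps per-NVMe error statistics and retries failed I/Os indefinitely, so each corrupted digest should surface as a retried transport error rather than an I/O failure. A sketch of the relevant RPCs from the trace (the target-side calls go through the harness's rpc_cmd wrapper, whose socket argument is omitted here):

  # target: route all crc32c work through the error-injecting accel module
  rpc.py accel_assign_opc -o crc32c -m error
  # initiator: per-controller error stats, unlimited retries
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: clear any previous injection, then corrupt the next 256 crc32c ops
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256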
00:34:09.240 [2024-07-15 07:02:56.713128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.240 [2024-07-15 07:02:56.713184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.240 [2024-07-15 07:02:56.713203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.240 [2024-07-15 07:02:56.730071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.240 [2024-07-15 07:02:56.730103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.240 [2024-07-15 07:02:56.730137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.240 [2024-07-15 07:02:56.746207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.240 [2024-07-15 07:02:56.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.240 [2024-07-15 07:02:56.746266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.240 [2024-07-15 07:02:56.759665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.759698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.759716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.771073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.771100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.771131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.783520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.783554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.783573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.797444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.797479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.797500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.808873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.808911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.808928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.821372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.821402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.821435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.834625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.834659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.834678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.241 [2024-07-15 07:02:56.847525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.241 [2024-07-15 07:02:56.847559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.241 [2024-07-15 07:02:56.847578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.861916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.861948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.861967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.872897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.872928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.872946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.886618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.886649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.886666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.903960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.903993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.904026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.914715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.914744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.914776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.929585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.929619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.929638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.502 [2024-07-15 07:02:56.944166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.502 [2024-07-15 07:02:56.944197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.502 [2024-07-15 07:02:56.944215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:56.955992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:56.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:56.956052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:56.971418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:56.971448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:56.971465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:56.982321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:56.982377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:09.503 [2024-07-15 07:02:56.982394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:56.998273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:56.998302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:56.998333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.013212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.013242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.013260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.029442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.029473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.029490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.041092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.041135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.041150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.054282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.054312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.054345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.066268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.066298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.066315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.078543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.078572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:7066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.078602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.093742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.093775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.093793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.503 [2024-07-15 07:02:57.107464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.503 [2024-07-15 07:02:57.107494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.503 [2024-07-15 07:02:57.107511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.118872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.118915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.118934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.131866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.131903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.131926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.145420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.145451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.145468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.158651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.158681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.158698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.174086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.174115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.174132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.185293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.185326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.185345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.201349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.201377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.201409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.212288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.212321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.212340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.764 [2024-07-15 07:02:57.225916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.764 [2024-07-15 07:02:57.225961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.764 [2024-07-15 07:02:57.225977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.241396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.241427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.241444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.252930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.252962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.252993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.266592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.266640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.266658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.279466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.279510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.279526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.292526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.292556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.292572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.305691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.305721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.305738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.317411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.317439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.317469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.333166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.333196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.333214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.347592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360) 00:34:09.765 [2024-07-15 07:02:57.347621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.765 [2024-07-15 07:02:57.347638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.765 [2024-07-15 07:02:57.358796] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360)
00:34:09.765 [2024-07-15 07:02:57.358823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.765 [2024-07-15 07:02:57.358854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly a hundred further qid:1 READ completions elided, 07:02:57.373 through 07:02:58.675: each repeats the same three-record pattern (a data digest error on tqpair=(0x208a360) from nvme_tcp.c:1450, the failing len:1 READ from nvme_qpair.c:243, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474), with only cid and lba varying ...]
00:34:11.331 [2024-07-15 07:02:58.690053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a360)
00:34:11.331 [2024-07-15 07:02:58.690083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.331 [2024-07-15 07:02:58.690100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.331
00:34:11.331 Latency(us)
00:34:11.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:11.331 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:11.331 nvme0n1 : 2.01 19180.58 74.92 0.00 0.00 6662.25 3495.25 19612.25
00:34:11.331 ===================================================================================================================
00:34:11.331 Total : 19180.58 74.92 0.00 0.00 6662.25 3495.25 19612.25
00:34:11.331 0
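
The summary above closes the first bdevperf pass of this digest test: every injected CRC-32C data digest error surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, yet the pass still reports 0.00 Fail/s, consistent with the bdev layer retrying transient errors (bdev_nvme_set_options --bdev-retry-count -1, set up the same way as for the second pass below). The wrapper invocation for this 4096-byte, queue-depth-128 pass scrolled out of the captured window, so the following is only a hedged sketch: the -o and -q values are taken from the job header above, and every other flag mirrors the 131072-byte invocation traced later in this log.

    # hedged sketch, not captured verbatim in this log
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
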
00:34:11.331 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:11.331 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:11.331 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:11.331 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:11.331 | .driver_specific
00:34:11.331 | .nvme_error
00:34:11.331 | .status_code
00:34:11.331 | .command_transient_transport_error'
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 791482
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 791482 ']'
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 791482
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 791482
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 791482'
killing process with pid 791482
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 791482
Received shutdown signal, test time was about 2.000000 seconds
00:34:11.590
00:34:11.590 Latency(us)
00:34:11.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:11.590 ===================================================================================================================
00:34:11.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:11.590 07:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 791482
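
The check traced above is how the harness decides the pass succeeded: bdev_get_iostat reports the per-bdev NVMe error counters accumulated under --nvme-error-stat, and the jq filter pulls out the transient-transport-error count (150 on this run), which must be greater than zero. A condensed sketch of that helper, assuming only rpc.py, jq, and the bperf RPC socket used throughout this log:

    # mirrors the get_transient_errcount / bperf_rpc calls traced above
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    # the assertion the trace shows as (( 150 > 0 )):
    (( $(get_transient_errcount nvme0n1) > 0 ))
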
00:34:11.851 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=791970
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 791970 /var/tmp/bperf.sock
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 791970 ']'
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:11.851 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:11.851 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:11.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:11.851 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:11.851 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:11.851 [2024-07-15 07:02:59.260234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:11.851 [2024-07-15 07:02:59.260313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791970 ]
00:34:11.851 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:11.851 Zero copy mechanism will not be used.
00:34:11.851 EAL: No free 2048 kB hugepages reported on node 1
00:34:11.851 [2024-07-15 07:02:59.321438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:11.851 [2024-07-15 07:02:59.411139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:12.110 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:12.110 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:12.369 07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
07:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:12.629 nvme0n1
00:34:12.629 07:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
07:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
07:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
07:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.402097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.402130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.402148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.412995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.413027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.413045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.423262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.423296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.423316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.433520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.433555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.433575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.443518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.443552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.443572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.453180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.453211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.453240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.462781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.462815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.462835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.473625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.473659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.473679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.484084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.484116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.484133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:12.888 [2024-07-15 07:03:00.493070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:12.888 [2024-07-15 07:03:00.493101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.888 [2024-07-15 07:03:00.493128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.158 [2024-07-15 07:03:00.502485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.502520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.502539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.511734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.511767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.511785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.521439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.521473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.521498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.530382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.530412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.530429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.540376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.540416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.540437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.551095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.551126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.551144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.561084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.561115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.561132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.571440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.571473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.571492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.581974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.582004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.582021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.592836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.592870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.592898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.603487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.603523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.603542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.614247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.614281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.614300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.625172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.625218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.625241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.635988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.636019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.636036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.646239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.646270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.646287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.654782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.654815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.654833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.664525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.664559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.664578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.674681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.674716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.674735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.683952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.683982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.684015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.694025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.694056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.694073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.703258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.703292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.712251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.712290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.712310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.720891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.720935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.720951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.729572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.729603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.729621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.738283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.738315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.738333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.746858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.746917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.755549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.755581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.755600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.159 [2024-07-15 07:03:00.764327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.159 [2024-07-15 07:03:00.764359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.159 [2024-07-15 07:03:00.764378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.422 [2024-07-15 07:03:00.773100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.422 [2024-07-15 07:03:00.773129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.422 [2024-07-15 07:03:00.773165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.781846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.781889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.781924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.790491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.790522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.790541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.799183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.799215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.799234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.807932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.807962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.807978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.816801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.816833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.816851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.825567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.825599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.825616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.834356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.834388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.834406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.843054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.843082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.843115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.851750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.851785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.851805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.860412] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.860445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.860474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.869331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.869363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.869382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.878005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.878034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.878051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.886958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.886987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.887019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.895661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.895692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.895711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.904291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.904322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.904341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.912937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.912981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.912997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.921603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.921635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.921653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.930247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.930279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.930298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.938781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.938812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.938830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.947560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.947593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.956307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.956341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.956361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.964946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.964975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.964992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.973636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.973668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.973687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.982290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.982323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.982341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.990923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.990952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.990968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:00.999547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:00.999578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:00.999597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:01.008236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:01.008269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:01.008293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:01.016952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:01.016981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:01.016997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.423 [2024-07-15 07:03:01.026727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.423 [2024-07-15 07:03:01.026760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.423 [2024-07-15 07:03:01.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.037399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.037434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.037453] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.047125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.047156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.047173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.057261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.057296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.057316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.067978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.068010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.068028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.078211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.078246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.078265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.088985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.089025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.089057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.099557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.099597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.099617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.110267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.110303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.110322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.121157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.121213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.121230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.131858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.131900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.131925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.141518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.141552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.141572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.151302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.151337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.151356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.162264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.162298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.162318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.172939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.172969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.172987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.183517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.183551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.183570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.194095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.194125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.194158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.204618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.204653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.204673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.214758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.214792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.214812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.225110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.225142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.225159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.236008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.236039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.236057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.246464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.246499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.246518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.256986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.257032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.257049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.267856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.267898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.267934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.278539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.278573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.278599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.681 [2024-07-15 07:03:01.286832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.681 [2024-07-15 07:03:01.286865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.681 [2024-07-15 07:03:01.286893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.296363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.296397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.296416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.305794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.305828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.305847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.314964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.314995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.315012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.324097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 
[2024-07-15 07:03:01.324127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.324144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.332901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.332946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.332962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.341853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.341908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.341935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.351471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.351506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.351526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.360539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.360579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.360598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.369640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.369674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.369693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.378387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.378420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.378439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.387065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.387094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.387110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.395728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.395761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.395779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.404325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.404375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.413167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.413222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.413240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.421901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.421956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.421974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.431278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.431331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.440282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50) 00:34:13.939 [2024-07-15 07:03:01.440316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.939 [2024-07-15 07:03:01.440335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.939 [2024-07-15 07:03:01.449277] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50)
00:34:13.939 [2024-07-15 07:03:01.449310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:13.939 [2024-07-15 07:03:01.449329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:13.939 [2024-07-15 07:03:01.458363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50)
00:34:13.939 [2024-07-15 07:03:01.458396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:13.939 [2024-07-15 07:03:01.458421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for qid:1, cid 0-15, from 07:03:01.467211 through 07:03:02.318766 ...]
00:34:14.763 [2024-07-15 07:03:02.326323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c00d50)
00:34:14.763 [2024-07-15 07:03:02.326358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.763 [2024-07-15 07:03:02.326375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:14.763
00:34:14.763 Latency(us)
00:34:14.763 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:14.763 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:14.763 nvme0n1 : 2.00           3393.89     424.24       0.00     0.00    4707.91    1365.33   14369.37
00:34:14.763 ===================================================================================================================
00:34:14.763 Total   :               3393.89     424.24       0.00     0.00    4707.91    1365.33   14369.37
00:34:14.763 0
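Every READ in the two-second randread job above completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22): the injected crc32c corruption makes the host compute a wrong data digest for each received payload, so the digest check fails by construction, and because error statistics were enabled on the controller the failures accumulate in a per-status counter. The trace that follows expands the script's get_transient_errcount helper; the sketch below reconstructs it as a standalone function from those expanded commands (the function body is inferred from the trace, not copied from the script source).

    # Read the transient-transport-error tally for one bdev from bdevperf's
    # iostat JSON; the rpc.py call and jq path match the expanded trace below.
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test passes only if at least one such error was counted; this run
    # tallied 219, as the assertion in the trace shows.
    (( $(get_transient_errcount nvme0n1) > 0 ))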
00:34:14.763 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:14.763 | .driver_specific
00:34:14.763 | .nvme_error
00:34:14.763 | .status_code
00:34:14.763 | .command_transient_transport_error'
00:34:14.763 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
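With the randread assertion satisfied, the script tears down the first bdevperf instance and relaunches it for the randwrite pass. The relaunch uses -z, so bdevperf comes up idle and waits for RPC configuration, and the script polls the UNIX domain socket before issuing any commands. A sketch of that lifecycle follows; the polling probe is an assumption (rpc_get_methods stands in for whatever check waitforlisten actually performs), while the bdevperf command line and the retry budget of 100 are taken from the trace below.

    # Relaunch bdevperf idle (-z: wait for RPCs) and poll its RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock..."
    for ((i = 0; i < 100; i++)); do
        # Any cheap RPC serves as a liveness probe once the socket answers.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done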
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 791970
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 791970 ']'
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 791970
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:15.021 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 791970
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 791970'
killing process with pid 791970
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 791970
00:34:15.280 Received shutdown signal, test time was about 2.000000 seconds
00:34:15.280
00:34:15.280 Latency(us)
00:34:15.280 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:15.280 ===================================================================================================================
00:34:15.280 Total   :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 791970
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=792521
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 792521 /var/tmp/bperf.sock
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 792521 ']'
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:15.280 07:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:15.540 [2024-07-15 07:03:02.906313] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:15.540 [2024-07-15 07:03:02.906385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792521 ]
00:34:15.540 EAL: No free 2048 kB hugepages reported on node 1
00:34:15.540 [2024-07-15 07:03:02.964717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:15.540 [2024-07-15 07:03:03.049449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:15.812 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:16.389 nvme0n1
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
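The four RPCs just traced set up the write-side error case: error statistics and unbounded bdev retries are enabled, crc32c injection is disabled while the controller attaches so the connect itself stays clean, the controller is attached with --ddgst so every data PDU carries a digest, and corruption is then re-armed for subsequent crc32c operations. Condensed below with the script's own helpers taken as given (bperf_rpc expands to rpc.py against /var/tmp/bperf.sock as shown above; which socket rpc_cmd targets is not visible in this trace):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # tally NVMe statuses, retry without limit
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # no corruption while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # data digest on: every payload is checked
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # re-arm corruption (-i 256 as in the trace)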
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:16.389 07:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:16.389 Running I/O for 2 seconds...
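Because bdevperf was started with -z, no I/O runs until perform_tests arrives over the same socket; the bperf_py helper traced above reduces to the single command sketched here. As soon as the job starts, the re-armed corruption surfaces on the write path below: data_crc32_calc_done in tcp.c fails the digest comparison for each data PDU, consistent with the check now firing on the receiving side of the written data rather than in the host's nvme_tcp.c as in the randread pass, and every affected WRITE again completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22).

    # Kick off the configured 2-second randwrite job on the idle bdevperf
    # instance; the command line matches the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests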
00:34:16.389 [2024-07-15 07:03:03.874701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190edd58
[2024-07-15 07:03:03.875815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 07:03:03.875866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0
[2024-07-15 07:03:03.888287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe2e8
[2024-07-15 07:03:03.889572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 07:03:03.889606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the remaining in-flight WRITEs from 07:03:03.901832 through 07:03:04.185895 ...]
00:34:16.650 [2024-07-15 07:03:04.196117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e3060
[2024-07-15 07:03:04.197394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.650 [2024-07-15 07:03:04.197428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.650 [2024-07-15 07:03:04.207967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f3a28 00:34:16.650 [2024-07-15 07:03:04.209446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.650 [2024-07-15 07:03:04.209475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.650 [2024-07-15 07:03:04.217518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e6b70 00:34:16.650 [2024-07-15 07:03:04.218329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.650 [2024-07-15 07:03:04.218356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.650 [2024-07-15 07:03:04.229425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f7538 00:34:16.650 [2024-07-15 07:03:04.230132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.650 [2024-07-15 07:03:04.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.650 [2024-07-15 07:03:04.241485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ecc78 00:34:16.650 [2024-07-15 07:03:04.242345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.651 [2024-07-15 07:03:04.242372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.651 [2024-07-15 07:03:04.253340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fdeb0 00:34:16.651 [2024-07-15 07:03:04.254549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.651 [2024-07-15 07:03:04.254577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.265383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e6fa8 00:34:16.909 [2024-07-15 07:03:04.266429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.266457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.276296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ee190 00:34:16.909 [2024-07-15 07:03:04.278067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24471 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.278095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.286968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e0a68 00:34:16.909 [2024-07-15 07:03:04.287814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.287841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.298832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e3498 00:34:16.909 [2024-07-15 07:03:04.299886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.299913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.309691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fd208 00:34:16.909 [2024-07-15 07:03:04.310653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.310681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.321661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f4f40 00:34:16.909 [2024-07-15 07:03:04.322821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.322849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.333810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f0bc0 00:34:16.909 [2024-07-15 07:03:04.335125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.335153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.344623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e5658 00:34:16.909 [2024-07-15 07:03:04.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.345593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.356671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f9b30 00:34:16.909 [2024-07-15 07:03:04.357410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.357438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.370059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f57b0 00:34:16.909 [2024-07-15 07:03:04.371627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.371655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.382039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe2e8 00:34:16.909 [2024-07-15 07:03:04.383767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.383794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.392760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e3498 00:34:16.909 [2024-07-15 07:03:04.394067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.394095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.404233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed920 00:34:16.909 [2024-07-15 07:03:04.405624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.405652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.415391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f6458 00:34:16.909 [2024-07-15 07:03:04.416833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.416863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.428646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eee38 00:34:16.909 [2024-07-15 07:03:04.430246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.430277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.441945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f1ca0 00:34:16.909 [2024-07-15 07:03:04.443720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.443751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.455283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e99d8 00:34:16.909 [2024-07-15 07:03:04.457224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.457256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.468213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e7c50 00:34:16.909 [2024-07-15 07:03:04.470171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.470199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.476468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e4578 00:34:16.909 [2024-07-15 07:03:04.477270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.477298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.487535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e9168 00:34:16.909 [2024-07-15 07:03:04.488386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.488414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.500530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e7818 00:34:16.909 [2024-07-15 07:03:04.501563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.501599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.909 [2024-07-15 07:03:04.512456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e4140 00:34:16.909 [2024-07-15 07:03:04.513560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.909 [2024-07-15 07:03:04.513588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.524680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fb8b8 00:34:17.167 [2024-07-15 07:03:04.526046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.526074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.536904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f1430 00:34:17.167 [2024-07-15 07:03:04.538419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.538447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.546556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190df118 00:34:17.167 [2024-07-15 07:03:04.547517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.547546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.558429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fbcf0 00:34:17.167 [2024-07-15 07:03:04.559279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.559307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.570371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e6738 00:34:17.167 [2024-07-15 07:03:04.571062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.571089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.582483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190df550 00:34:17.167 [2024-07-15 07:03:04.583359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.583387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.596005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e88f8 00:34:17.167 [2024-07-15 07:03:04.597760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.597787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.608055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f9b30 00:34:17.167 [2024-07-15 
07:03:04.609971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.609999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.616300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eaef0 00:34:17.167 [2024-07-15 07:03:04.617190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.617217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.627262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fb480 00:34:17.167 [2024-07-15 07:03:04.628080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.628107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.639388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f1ca0 00:34:17.167 [2024-07-15 07:03:04.640444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.640471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.651565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e7818 00:34:17.167 [2024-07-15 07:03:04.652748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.663706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e0a68 00:34:17.167 [2024-07-15 07:03:04.665008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.674456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e0ea0 00:34:17.167 [2024-07-15 07:03:04.675314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.675342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.686118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e84c0 
00:34:17.167 [2024-07-15 07:03:04.686770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.686797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.698088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ef6a8 00:34:17.167 [2024-07-15 07:03:04.698886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.698913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.710000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e12d8 00:34:17.167 [2024-07-15 07:03:04.710998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.711035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.720863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe720 00:34:17.167 [2024-07-15 07:03:04.722602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.722631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.731673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fc128 00:34:17.167 [2024-07-15 07:03:04.732598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.732625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.743724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ec840 00:34:17.167 [2024-07-15 07:03:04.744796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.744823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.754693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e5a90 00:34:17.167 [2024-07-15 07:03:04.755708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.167 [2024-07-15 07:03:04.755735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.167 [2024-07-15 07:03:04.767654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with 
pdu=0x2000190f1430 00:34:17.167 [2024-07-15 07:03:04.768867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.168 [2024-07-15 07:03:04.768902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.168 [2024-07-15 07:03:04.779601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fa3a0 00:34:17.168 [2024-07-15 07:03:04.780975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.168 [2024-07-15 07:03:04.781002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.790631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190df550 00:34:17.426 [2024-07-15 07:03:04.791995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.792022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.802717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f6458 00:34:17.426 [2024-07-15 07:03:04.804189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.804221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.814953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e1b48 00:34:17.426 [2024-07-15 07:03:04.816547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.816575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.825853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe720 00:34:17.426 [2024-07-15 07:03:04.827053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.827081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.837568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fd208 00:34:17.426 [2024-07-15 07:03:04.838594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.838621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.851010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22f4bc0) with pdu=0x2000190f2d80 00:34:17.426 [2024-07-15 07:03:04.853014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.853041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.859393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e12d8 00:34:17.426 [2024-07-15 07:03:04.860313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.860340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.870498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed4e8 00:34:17.426 [2024-07-15 07:03:04.871391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.871418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.883556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190efae0 00:34:17.426 [2024-07-15 07:03:04.884633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.884662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.894579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e5ec8 00:34:17.426 [2024-07-15 07:03:04.895625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.895653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.426 [2024-07-15 07:03:04.907696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f7100 00:34:17.426 [2024-07-15 07:03:04.908976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.426 [2024-07-15 07:03:04.909004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.920982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e88f8 00:34:17.427 [2024-07-15 07:03:04.922426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.922457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.934203] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f35f0 00:34:17.427 [2024-07-15 07:03:04.935819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.935850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.947423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e5ec8 00:34:17.427 [2024-07-15 07:03:04.949238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.949269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.960666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190feb58 00:34:17.427 [2024-07-15 07:03:04.962585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.962617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.973905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f4f40 00:34:17.427 [2024-07-15 07:03:04.976045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.976072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.982836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f6020 00:34:17.427 [2024-07-15 07:03:04.983741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.983771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:04.995984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eff18 00:34:17.427 [2024-07-15 07:03:04.997091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:04.997118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:05.009292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f6890 00:34:17.427 [2024-07-15 07:03:05.010534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:05.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:05.022131] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f2948 00:34:17.427 [2024-07-15 07:03:05.023392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:05.023422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.427 [2024-07-15 07:03:05.035140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ec840 00:34:17.427 [2024-07-15 07:03:05.036580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.427 [2024-07-15 07:03:05.036611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.686 [2024-07-15 07:03:05.048061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eee38 00:34:17.686 [2024-07-15 07:03:05.049513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.049545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.061112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e12d8 00:34:17.687 [2024-07-15 07:03:05.062732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.062764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.073122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f0ff8 00:34:17.687 [2024-07-15 07:03:05.074720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.074752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.085617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed4e8 00:34:17.687 [2024-07-15 07:03:05.087227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.087257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.096694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f0350 00:34:17.687 [2024-07-15 07:03:05.097970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.097998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.687 
[2024-07-15 07:03:05.108969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f35f0 00:34:17.687 [2024-07-15 07:03:05.110048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.120380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e1b48 00:34:17.687 [2024-07-15 07:03:05.122260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.122296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.131237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fa7d8 00:34:17.687 [2024-07-15 07:03:05.132136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.132174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.144439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190de038 00:34:17.687 [2024-07-15 07:03:05.145506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.145537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.157683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f5378 00:34:17.687 [2024-07-15 07:03:05.158970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.158998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.170479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e8d30 00:34:17.687 [2024-07-15 07:03:05.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.171942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.182835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e3498 00:34:17.687 [2024-07-15 07:03:05.184349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.184376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004a p:0 
m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.195591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190de038 00:34:17.687 [2024-07-15 07:03:05.197144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.197170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.208240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ef6a8 00:34:17.687 [2024-07-15 07:03:05.210192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.210223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.221427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f7538 00:34:17.687 [2024-07-15 07:03:05.223552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.223583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.230454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f8a50 00:34:17.687 [2024-07-15 07:03:05.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.231403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.243742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e84c0 00:34:17.687 [2024-07-15 07:03:05.244852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.244889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.257022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f2948 00:34:17.687 [2024-07-15 07:03:05.258306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.258337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.270265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190df550 00:34:17.687 [2024-07-15 07:03:05.271678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.271709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.283430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e12d8 00:34:17.687 [2024-07-15 07:03:05.285057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.285085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.687 [2024-07-15 07:03:05.295421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e7818 00:34:17.687 [2024-07-15 07:03:05.297058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.687 [2024-07-15 07:03:05.297086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.308623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed4e8 00:34:17.946 [2024-07-15 07:03:05.310420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.310450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.321951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe2e8 00:34:17.946 [2024-07-15 07:03:05.323921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.323949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.335125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190feb58 00:34:17.946 [2024-07-15 07:03:05.337278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.337309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.344098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e0630 00:34:17.946 [2024-07-15 07:03:05.345056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.345084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.357435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e88f8 00:34:17.946 [2024-07-15 07:03:05.358517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.358548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.369627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190de038 00:34:17.946 [2024-07-15 07:03:05.370718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.370748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.382818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f5378 00:34:17.946 [2024-07-15 07:03:05.384080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.395995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e8d30 00:34:17.946 [2024-07-15 07:03:05.397416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.409294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed920 00:34:17.946 [2024-07-15 07:03:05.410873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.946 [2024-07-15 07:03:05.410925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.946 [2024-07-15 07:03:05.422481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190de038 00:34:17.947 [2024-07-15 07:03:05.424237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.424267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.434296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eaef0 00:34:17.947 [2024-07-15 07:03:05.435533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.435563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.447097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f96f8 00:34:17.947 [2024-07-15 07:03:05.448195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.448231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.459054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190dfdc0 00:34:17.947 [2024-07-15 07:03:05.460888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.460934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.469859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e4140 00:34:17.947 [2024-07-15 07:03:05.470782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.470811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.483086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed4e8 00:34:17.947 [2024-07-15 07:03:05.484147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.484202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.496373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fc998 00:34:17.947 [2024-07-15 07:03:05.497624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.497655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.509601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e9168 00:34:17.947 [2024-07-15 07:03:05.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.511085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.522843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e7c50 00:34:17.947 [2024-07-15 07:03:05.524439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.524470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.536128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ed4e8 00:34:17.947 [2024-07-15 07:03:05.537892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.537936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:17.947 [2024-07-15 07:03:05.549318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fe2e8 00:34:17.947 [2024-07-15 07:03:05.551237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.947 [2024-07-15 07:03:05.551267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:18.207 [2024-07-15 07:03:05.562547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e1b48 00:34:18.208 [2024-07-15 07:03:05.564693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.564729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.571585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fbcf0 00:34:18.208 [2024-07-15 07:03:05.572511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.572541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.584474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ef270 00:34:18.208 [2024-07-15 07:03:05.585389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.585419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.597485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e88f8 00:34:18.208 [2024-07-15 07:03:05.598560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.598591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.609444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e73e0 00:34:18.208 [2024-07-15 07:03:05.610526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.610555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.622648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e9168 00:34:18.208 [2024-07-15 07:03:05.623936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 
07:03:05.623963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.635860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fc998 00:34:18.208 [2024-07-15 07:03:05.637276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.637306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.649008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f5378 00:34:18.208 [2024-07-15 07:03:05.650615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.650645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.662247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e73e0 00:34:18.208 [2024-07-15 07:03:05.664058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.664086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.675464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fdeb0 00:34:18.208 [2024-07-15 07:03:05.677384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.677414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.688654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ecc78 00:34:18.208 [2024-07-15 07:03:05.690787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.690818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.697652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e6fa8 00:34:18.208 [2024-07-15 07:03:05.698568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.698598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.709598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fd208 00:34:18.208 [2024-07-15 07:03:05.710519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:18.208 [2024-07-15 07:03:05.710549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.722921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ddc00 00:34:18.208 [2024-07-15 07:03:05.724061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.724089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.736130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ec408 00:34:18.208 [2024-07-15 07:03:05.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.737422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.749518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f5be8 00:34:18.208 [2024-07-15 07:03:05.750957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.750986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.762736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f81e0 00:34:18.208 [2024-07-15 07:03:05.764348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.764379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.775997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190ddc00 00:34:18.208 [2024-07-15 07:03:05.777751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.777782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.789235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190dfdc0 00:34:18.208 [2024-07-15 07:03:05.791179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.208 [2024-07-15 07:03:05.791210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:18.208 [2024-07-15 07:03:05.802442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190eaef0 00:34:18.208 [2024-07-15 07:03:05.804534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21625 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:34:18.208 [2024-07-15 07:03:05.804565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:18.208 [2024-07-15 07:03:05.811431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f1430
00:34:18.208 [2024-07-15 07:03:05.812336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.208 [2024-07-15 07:03:05.812366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:18.467 [2024-07-15 07:03:05.824690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190f3e60
00:34:18.467 [2024-07-15 07:03:05.825783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.467 [2024-07-15 07:03:05.825814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:18.467 [2024-07-15 07:03:05.837866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fc128
00:34:18.467 [2024-07-15 07:03:05.839233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.467 [2024-07-15 07:03:05.839264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:18.467 [2024-07-15 07:03:05.849910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190e9168
00:34:18.467 [2024-07-15 07:03:05.851155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.467 [2024-07-15 07:03:05.851199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:18.467 [2024-07-15 07:03:05.863088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4bc0) with pdu=0x2000190fc998
00:34:18.467 [2024-07-15 07:03:05.864510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.467 [2024-07-15 07:03:05.864542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:18.467
00:34:18.467 Latency(us)
00:34:18.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.467 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:18.467 nvme0n1 : 2.00 20925.23 81.74 0.00 0.00 6107.81 2318.03 15243.19
00:34:18.467 ===================================================================================================================
00:34:18.467 Total : 20925.23 81.74 0.00 0.00 6107.81 2318.03 15243.19
00:34:18.467 0
00:34:18.467 07:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:18.467 07:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:18.467 07:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:18.467 07:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:18.467 | .driver_specific
00:34:18.467 | .nvme_error
00:34:18.467 | .status_code
00:34:18.467 | .command_transient_transport_error'
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 792521
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 792521 ']'
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 792521
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 792521
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 792521'
killing process with pid 792521
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 792521
00:34:18.725 Received shutdown signal, test time was about 2.000000 seconds
00:34:18.725
00:34:18.725 Latency(us)
00:34:18.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.725 ===================================================================================================================
00:34:18.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:18.725 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 792521
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=792927
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 792927 /var/tmp/bperf.sock
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 792927 ']'
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:18.984 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:18.984 [2024-07-15 07:03:06.457482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:18.984 [2024-07-15 07:03:06.457560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792927 ]
00:34:18.984 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:18.984 Zero copy mechanism will not be used.
00:34:18.984 EAL: No free 2048 kB hugepages reported on node 1
00:34:18.984 [2024-07-15 07:03:06.522823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:19.242 [2024-07-15 07:03:06.612727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:19.242 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:19.242 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:19.242 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:19.242 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:19.500 07:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:20.066 nvme0n1
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
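The xtrace above is the complete setup for this digest-error pass: error statistics and unlimited retries are enabled on the bdevperf side, CRC32C error injection is reset on the target, the controller is attached over TCP with data digest (--ddgst) enabled, and corruption of every 32nd CRC32C calculation is armed before the workload runs. As a minimal sketch (commands, sockets, and addresses taken from the trace above; the target's RPC socket is assumed to be the default /var/tmp/spdk.sock, which the trace does not show), the same sequence issued by hand from the SPDK tree would look like:

    # Sketch only: bperf.sock and the TCP target address come from the trace;
    # the target-side RPC socket is an assumption (rpc_cmd default).
    BPERF_RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    TGT_RPC="scripts/rpc.py"   # assumed default /var/tmp/spdk.sock

    # bdevperf side: record per-command NVMe error stats and retry I/O forever,
    # so injected digest errors are counted instead of failing the job.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: make sure CRC32C error injection starts out disabled, then
    # attach the TCP controller with data digest (--ddgst) enabled.
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd CRC32C result, then kick off the queued workload.
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because --bdev-retry-count is -1, every injected digest failure is retried rather than surfaced as a job failure, so the run completes and the errors are visible only in the per-bdev error counters.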
00:34:20.066 07:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:20.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:20.066 Zero copy mechanism will not be used. 00:34:20.066 Running I/O for 2 seconds... 00:34:20.066 [2024-07-15 07:03:07.533817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.534228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.534270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.544474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.544839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.544888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.556034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.556416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.565844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.566204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.566251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.576016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.576387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.576426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.586788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.587145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.587174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.596757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.597131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.597160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.607552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.607939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.607968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.617040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.617232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.617263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.627009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.627325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.627354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.636977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.637133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.637161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.646616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.646952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.646996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.656363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.656677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.656705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.666516] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.666830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.666858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.066 [2024-07-15 07:03:07.676639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.066 [2024-07-15 07:03:07.676959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.066 [2024-07-15 07:03:07.676988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.686552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.686923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.686971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.696993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.697322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.697349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.707112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.707477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.707510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.716692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.717074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.717101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.726658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.726978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.727011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
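Each armed corruption then surfaces as the record triple repeated above: tcp.c:2058 (data_crc32_calc_done) flags the digest mismatch on the PDU, the offending 32-block WRITE is reprinted, and its completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the retry policy absorbs. Afterwards the harness totals these completions with get_transient_errcount, i.e. the bdev_get_iostat / jq pipeline traced earlier; a minimal sketch of that counting step (socket, bdev name, and filter all from the trace):

    # Read per-bdev NVMe error stats from the bdevperf RPC socket and pull out
    # the transient transport error counter (enabled by --nvme-error-stat).
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'

A non-zero count here is the test's pass condition for the error path: it proves the injected CRC32C corruptions actually travelled through the TCP transport and were classified as transient transport errors.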
00:34:20.326 [2024-07-15 07:03:07.736600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.736931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.736960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.746997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.747314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.747342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.755996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.756106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.756134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.765838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.766200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.766227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.775479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.775790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.775817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.785418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.785760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.785789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.795682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.796014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.796044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.804908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.805247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.805274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.814540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.814908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.823964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.824275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.824303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.834150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.834463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.834491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.844053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.844382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.853922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.854247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.854275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.865823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.866183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.866212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.876589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.876940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.876968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.885575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.885968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.885997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.895388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.895713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.895741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.905185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.905529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.905558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.326 [2024-07-15 07:03:07.914824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.326 [2024-07-15 07:03:07.915174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.326 [2024-07-15 07:03:07.915202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.327 [2024-07-15 07:03:07.924125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.327 [2024-07-15 07:03:07.924455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.327 [2024-07-15 07:03:07.924483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.327 [2024-07-15 07:03:07.933650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.327 [2024-07-15 07:03:07.933988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.327 [2024-07-15 07:03:07.934017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.942635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.943014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.943043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.951928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.952276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.952319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.961986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.962312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.962340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.971859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.972063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.972091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.981099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.981456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.981504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.989325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.989685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:07.989713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:07.998479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:07.998793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 
[2024-07-15 07:03:07.998821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.007777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.008126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.008155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.016383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.016723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.025509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.025861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.025896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.034361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.034773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.034800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.043602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.044020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.044048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.052715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.053141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.053170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.061976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.062284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.062327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.070533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.070934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.070962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.079848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.080151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.080179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.088416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.088729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.088757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.096246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.096542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.096569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.105699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.106041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.106069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.114461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.114816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.114858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.122580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.122916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.122945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.131659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.131910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.131942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.140664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.141012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.141040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.149237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.149581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.156475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.156755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.156782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.164554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.164864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.164899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.172976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.173278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.173306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.181552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.587 [2024-07-15 07:03:08.181866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.587 [2024-07-15 07:03:08.181900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.587 [2024-07-15 07:03:08.190075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.588 [2024-07-15 07:03:08.190409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.588 [2024-07-15 07:03:08.190436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.588 [2024-07-15 07:03:08.198644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.588 [2024-07-15 07:03:08.198940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.588 [2024-07-15 07:03:08.198968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.207196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.207554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.207582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.214766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.215112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.215140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.223246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.223605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.223637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.231811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.232054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.232083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.241099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 
[2024-07-15 07:03:08.241382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.241411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.248666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.249026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.249054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.256413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.256737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.256765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.265155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.265527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.265555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.273189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.273470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.273498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.281842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.282116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.290561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.290885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.290913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.298560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.298864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.298903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.847 [2024-07-15 07:03:08.307670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.847 [2024-07-15 07:03:08.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.847 [2024-07-15 07:03:08.308047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.315914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.316281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.316310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.324730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.325050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.325078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.333171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.333487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.333515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.342089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.342429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.342457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.350269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.350552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.350585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.359096] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.359409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.359436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.368065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.368450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.368478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.376100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.376350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.376378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.384247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.384559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.384586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.392389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.392752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.392780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.400505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.400834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.400872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.408562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.408841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.408869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
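[Editor's note] The repeating pairs above are SPDK's NVMe/TCP transport failing CRC32C data-digest validation on received data PDUs (tcp.c: data_crc32_calc_done), after which each affected WRITE (sqid:1 cid:15, varying LBA, len:32) completes with status (00/22). Below is a minimal, hedged sketch of how an SPDK host connects with header/data digests enabled so this validation path runs at all; the target address, port, subsystem NQN, and app name are placeholders, not values from this log.

/* Hedged sketch: connect an SPDK NVMe/TCP host with header/data digests
 * enabled, so the transport computes and validates CRC32C on PDUs.
 * Address, port, and NQN below are placeholders. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "digest_example";            /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Placeholder connection string; substitute a real target. */
    spdk_nvme_transport_id_parse(&trid,
        "trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1");

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    ctrlr_opts.header_digest = true;  /* HDGST: CRC32C over PDU headers */
    ctrlr_opts.data_digest = true;    /* DDGST: CRC32C over PDU data    */

    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    /* ... submit I/O; a digest mismatch on a data PDU surfaces as the
     * data_crc32_calc_done error seen throughout this log ... */
    spdk_nvme_detach(ctrlr);
    return 0;
}

[End editor's note]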
00:34:20.848 [2024-07-15 07:03:08.416742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.417058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.417087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.425402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.425729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.425756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.434466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.434828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.434856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.443860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.444298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.444326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.451923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.452272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.452300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:20.848 [2024-07-15 07:03:08.460609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:20.848 [2024-07-15 07:03:08.460925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.848 [2024-07-15 07:03:08.460953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.469050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.469333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.469360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.476734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.477045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.477073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.485079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.485430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.485457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.493283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.493585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.493613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.501871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.502213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.502241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.510468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.510775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.510802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.518223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.518537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.518564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.526151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.526506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.526534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.534308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.534628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.534654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.541466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.541754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.541781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.549757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.550048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.550076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.557516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.557874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.557914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.566518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.566767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.566799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.574802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.575124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.575152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.583056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.583293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.583321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.590426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.590779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.598194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.598514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.598541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.605574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.605984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.614281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.614640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.614668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.623326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.623617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.623646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.632141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.632478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.632506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.639513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.639872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 
[2024-07-15 07:03:08.639908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.647174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.647518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.647546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.656070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.656424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.656451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.664719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.665045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.665072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.672660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.672962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.672990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.680918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.681277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.681304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.690016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.108 [2024-07-15 07:03:08.690337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.108 [2024-07-15 07:03:08.690365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.108 [2024-07-15 07:03:08.697843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.109 [2024-07-15 07:03:08.698193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.109 [2024-07-15 07:03:08.698221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.109 [2024-07-15 07:03:08.706227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.109 [2024-07-15 07:03:08.706479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.109 [2024-07-15 07:03:08.706507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.109 [2024-07-15 07:03:08.714009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.109 [2024-07-15 07:03:08.714279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.109 [2024-07-15 07:03:08.714307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.721854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.722179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.722208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.729658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.729980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.737371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.737684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.737712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.746041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.746346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.746373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.753278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.753602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.753630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.761262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.761590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.761618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.769343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.769662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.769690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.778314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.778626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.778658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.786647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.786944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.786971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.794056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.794419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.794447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.803390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.803676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.803703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.812073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.812431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.812459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.820218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.820527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.820555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.828257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.828616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.828644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.836590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.836907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.845258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.845544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.845572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.853837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.854199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.854227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.862078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.862338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.862365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.870645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 
[2024-07-15 07:03:08.870969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.870996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.879517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.879862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.879896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.887902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.888144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.888174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.896757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.897065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.897093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.904128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.904468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.912716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.912970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.368 [2024-07-15 07:03:08.912998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.368 [2024-07-15 07:03:08.920976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.368 [2024-07-15 07:03:08.921270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.921303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.929984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.930308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.930338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.938325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.938724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.938753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.947316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.947679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.947707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.955927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.956192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.956220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.964712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.964981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.965010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.973232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.369 [2024-07-15 07:03:08.973445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.369 [2024-07-15 07:03:08.973472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.369 [2024-07-15 07:03:08.981791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.628 [2024-07-15 07:03:08.982020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.628 [2024-07-15 07:03:08.982049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.628 [2024-07-15 07:03:08.990151] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.628 [2024-07-15 07:03:08.990421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.628 [2024-07-15 07:03:08.990449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.628 [2024-07-15 07:03:08.998514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.628 [2024-07-15 07:03:08.998779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:08.998807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.629 [2024-07-15 07:03:09.007747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.629 [2024-07-15 07:03:09.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:09.008141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:21.629 [2024-07-15 07:03:09.016430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.629 [2024-07-15 07:03:09.016683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:09.016711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:21.629 [2024-07-15 07:03:09.025349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.629 [2024-07-15 07:03:09.025607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:09.025635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:21.629 [2024-07-15 07:03:09.033792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.629 [2024-07-15 07:03:09.034053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:09.034081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:21.629 [2024-07-15 07:03:09.042839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90 00:34:21.629 [2024-07-15 07:03:09.043120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.629 [2024-07-15 07:03:09.043148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
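[Editor's note] Every completion in this span carries status (00/22), which SPDK's printer renders as COMMAND TRANSIENT TRANSPORT ERROR: status code type 0x0 (generic) with status code 0x22, a retryable status indicating the command was well-formed but the transport hiccuped. A hedged sketch of a completion callback that singles out this status follows; the constant name SPDK_NVME_SC_TRANSIENT_TRANSPORT_ERROR is assumed from SPDK's nvme_spec.h, and retry policy is left to the caller.

/* Hedged sketch of an I/O completion callback that detects the (00/22)
 * status printed throughout this log. SPDK_NVME_SC_TRANSIENT_TRANSPORT_ERROR
 * is an assumed constant name from nvme_spec.h. */
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    bool *retry = ctx;

    if (spdk_nvme_cpl_is_error(cpl)) {
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_TRANSIENT_TRANSPORT_ERROR) {
            /* Digest mismatch or similar transport fault: the command
             * itself was valid, so it is a candidate for resubmission. */
            *retry = true;
        } else {
            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                    (unsigned int)cpl->status.sct,
                    (unsigned int)cpl->status.sc);
        }
    }
}

Whether to resubmit is policy; higher layers such as SPDK's bdev_nvme module implement their own retry handling on top of statuses like this one.
[End editor's note]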
00:34:21.629 [2024-07-15 07:03:09.050537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90
00:34:21.629 [2024-07-15 07:03:09.050903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:21.629 [2024-07-15 07:03:09.050930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly fifty further WRITEs between 07:03:09.059 and 07:03:09.512; only the lba and the cycling sqhd values 0001/0021/0041/0061 change ...]
00:34:22.150 [2024-07-15 07:03:09.519842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f4e90) with pdu=0x2000190fef90
00:34:22.150 [2024-07-15 07:03:09.520163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:22.150 [2024-07-15 07:03:09.520194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:22.150
00:34:22.150 Latency(us)
00:34:22.150 Device Information                                                             : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:34:22.150 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:22.150 nvme0n1                                                                        :       2.00 3529.34  441.17    0.00  0.00  4523.36  3155.44  16019.91
00:34:22.150 ===================================================================================================================
00:34:22.150 Total                                                                          :            3529.34  441.17    0.00  0.00  4523.36  3155.44  16019.91
00:34:22.150 0
00:34:22.150 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:22.150 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:22.150 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:34:22.150 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 792927
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 792927 ']'
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 792927
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 792927
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 792927'
00:34:22.410 killing process with pid 792927
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 792927
00:34:22.410 Received shutdown signal, test time was about 2.000000 seconds
00:34:22.410
00:34:22.410 Latency(us)
00:34:22.410 Device Information                                                             : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:34:22.410 ===================================================================================================================
00:34:22.410 Total                                                                          :               0.00    0.00    0.00  0.00     0.00     0.00      0.00
00:34:22.410 07:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 792927
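For reference, the 228 asserted by (( 228 > 0 )) above is pulled straight from bdev iostat; the probe can be replayed by hand while the bperf socket is still up (a minimal sketch, reusing the socket path, bdev name, and jq path from the trace):

#!/usr/bin/env bash
# Count NVMe completions that carried COMMAND TRANSIENT TRANSPORT ERROR,
# as host/digest.sh@27-28 does above (bperf.sock and nvme0n1 assumed).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "data digest errors surfaced as $errcount transient transport errors"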
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 791451
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 791451 ']'
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 791451
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 791451
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 791451'
00:34:22.669 killing process with pid 791451
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 791451
00:34:22.669 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 791451
00:34:22.929
00:34:22.929 real    0m15.018s
00:34:22.929 user    0m29.809s
00:34:22.929 sys     0m4.071s
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:22.929 ************************************
00:34:22.929 END TEST nvmf_digest_error
00:34:22.929 ************************************
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:22.929 rmmod nvme_tcp
00:34:22.929 rmmod nvme_fabrics
00:34:22.929 rmmod nvme_keyring
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 791451 ']'
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 791451
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 791451 ']'
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 791451
00:34:22.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (791451) - No such process
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 791451 is not found'
00:34:22.929 Process with pid 791451 is not found
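The killprocess helper traced twice above (once killing reactor_0, once hitting the No-such-process branch) reduces to a few lines; this is a sketch inferred from the trace, not the canonical autotest_common.sh source:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 only probes for existence; this is the "No such process" branch seen above
    kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
    fi
    [ "$process_name" = sudo ] && return 1                # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}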
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:22.929 07:03:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:24.836 07:03:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:24.836
00:34:24.836 real    0m34.578s
00:34:24.836 user    1m0.896s
00:34:24.836 sys     0m9.563s
00:34:24.836 07:03:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:24.836 07:03:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:34:24.836 ************************************
00:34:24.836 END TEST nvmf_digest
00:34:24.836 ************************************
00:34:24.836 07:03:12 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:34:24.836 07:03:12 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:34:24.836 07:03:12 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:34:24.836 07:03:12 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:34:24.836 07:03:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:24.836 07:03:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:24.836 07:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:25.094 ************************************
00:34:25.094 START TEST nvmf_bdevperf
00:34:25.094 ************************************
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:34:25.094 * Looking for test storage...
00:34:25.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:25.094 07:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=[the /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain prefixes repeated, then the stock system directories through /var/lib/snapd/snap/bin]
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=[same value, go toolchain first]
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=[same value, protoc first]
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo [the assembled PATH]
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:34:25.095 07:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@291-@318: the pci_devs/pci_net_devs/pci_drivers/net_devs arrays are declared and the e810, x722, and mlx device-id tables are filled in (0x1592, 0x159b, 0x37d2, 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) ...]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:34:26.994 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:34:26.994 Found 0000:0a:00.1 (0x8086 - 0x159b)
[... nvmf/common.sh@340-@394: per-port driver and device-id checks for both ice-bound ports (neither is a 0x1017/0x1019 part, the transport is not rdma) and the /sys/bus/pci net-device lookups, all as traced ...]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:34:26.994 Found net devices under 0000:0a:00.0: cvl_0_0
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:34:26.994 Found net devices under 0000:0a:00.1: cvl_0_1
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:26.994 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:26.995 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:26.995 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:26.995 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
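Collected from the trace above, the whole target/initiator split is about ten commands; a runnable sketch (the interface names are the two e810 ports found earlier, run as root):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port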
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:27.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:27.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms
00:34:27.254
00:34:27.254 --- 10.0.0.2 ping statistics ---
00:34:27.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:27.254 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:34:27.254 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:27.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:27.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms
00:34:27.254
00:34:27.254 --- 10.0.0.1 ping statistics ---
00:34:27.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:27.255 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=795779
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 795779
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 795779 ']'
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:27.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:27.255 07:03:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
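waitforlisten above polls the target's RPC socket rather than sleeping a fixed time; a standalone equivalent (a sketch under assumptions: rpc_get_methods is a stock SPDK RPC, and the socket path and 100-retry cap mirror the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # succeeds only once nvmf_tgt has created /var/tmp/spdk.sock and answers RPCs
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1
done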
[2024-07-15 07:03:14.748607] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-15 07:03:14.748695] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 07:03:14.815498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-07-15 07:03:14.905443] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:27.513 [2024-07-15 07:03:14.905502] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:27.513 [2024-07-15 07:03:14.905530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:27.513 [2024-07-15 07:03:14.905541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:27.513 [2024-07-15 07:03:14.905550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:27.513 [2024-07-15 07:03:14.905639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:27.513 [2024-07-15 07:03:14.905703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:34:27.513 [2024-07-15 07:03:14.905706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 [2024-07-15 07:03:15.049293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 Malloc0
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
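Those five rpc_cmd calls are the entire target configuration; replayed against the default socket they look like this (a sketch, assuming rpc_cmd forwards to scripts/rpc.py as the autotest helpers do):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420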
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:27.513 [2024-07-15 07:03:15.111524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:34:27.513 07:03:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:27.514 {
00:34:27.514   "params": {
00:34:27.514     "name": "Nvme$subsystem",
00:34:27.514     "trtype": "$TEST_TRANSPORT",
00:34:27.514     "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:27.514     "adrfam": "ipv4",
00:34:27.514     "trsvcid": "$NVMF_PORT",
00:34:27.514     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:27.514     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:27.514     "hdgst": ${hdgst:-false},
00:34:27.514     "ddgst": ${ddgst:-false}
00:34:27.514   },
00:34:27.514   "method": "bdev_nvme_attach_controller"
00:34:27.514 }
00:34:27.514 EOF
00:34:27.514 )")
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:27.514 07:03:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:27.514   "params": {
00:34:27.514     "name": "Nvme1",
00:34:27.514     "trtype": "tcp",
00:34:27.514     "traddr": "10.0.0.2",
00:34:27.514     "adrfam": "ipv4",
00:34:27.514     "trsvcid": "4420",
00:34:27.514     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:27.514     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:27.514     "hdgst": false,
00:34:27.514     "ddgst": false
00:34:27.514   },
00:34:27.514   "method": "bdev_nvme_attach_controller"
00:34:27.514 }'
[2024-07-15 07:03:15.160517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-15 07:03:15.160602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795830 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 07:03:15.224329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 07:03:15.309903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:28.029 Running I/O for 1 seconds...
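The config streamed to bdevperf over /dev/fd/62 is just the attach block printed above; written to a file, the same 1-second verify run can be reproduced standalone (a sketch: the "subsystems"/"bdev" envelope is the standard SPDK JSON-config wrapper, assumed here rather than shown in the trace):

cat > /tmp/bperf.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
EOF
# -q 128 outstanding I/Os, 4096-byte I/O size, verify workload, 1 second
build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1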
00:34:29.437
00:34:29.437                                                              Latency(us)
00:34:29.437 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:29.437 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:29.437 	 Verification LBA range: start 0x0 length 0x4000
00:34:29.437 	 Nvme1n1             :       1.01    8596.12      33.58       0.00     0.00   14809.89    2827.76   17476.27
00:34:29.437 ===================================================================================================================
00:34:29.437 Total                                  :               8596.12      33.58       0.00     0.00   14809.89    2827.76   17476.27
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=796065
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:29.437 {
00:34:29.437 "params": {
00:34:29.437 "name": "Nvme$subsystem",
00:34:29.437 "trtype": "$TEST_TRANSPORT",
00:34:29.437 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:29.437 "adrfam": "ipv4",
00:34:29.437 "trsvcid": "$NVMF_PORT",
00:34:29.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:29.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:29.437 "hdgst": ${hdgst:-false},
00:34:29.437 "ddgst": ${ddgst:-false}
00:34:29.437 },
00:34:29.437 "method": "bdev_nvme_attach_controller"
00:34:29.437 }
00:34:29.437 EOF
00:34:29.437 )")
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:29.437 07:03:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:29.437 "params": {
00:34:29.437 "name": "Nvme1",
00:34:29.437 "trtype": "tcp",
00:34:29.437 "traddr": "10.0.0.2",
00:34:29.437 "adrfam": "ipv4",
00:34:29.437 "trsvcid": "4420",
00:34:29.437 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:29.437 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:29.437 "hdgst": false,
00:34:29.437 "ddgst": false
00:34:29.437 },
00:34:29.437 "method": "bdev_nvme_attach_controller"
00:34:29.437 }'
00:34:29.437 [2024-07-15 07:03:16.879947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-15 07:03:16.880024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796065 ]
00:34:29.437 EAL: No free 2048 kB hugepages reported on node 1
00:34:29.437 [2024-07-15 07:03:16.940520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:29.437 [2024-07-15 07:03:17.028156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:30.002 Running I/O for 15 seconds...
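
The table from the one-second verify pass reads, per job and in total: runtime, IOPS, throughput, failure and timeout rates, then average/min/max latency in microseconds. The IOPS and MiB/s columns should be mutually consistent with the 4096-byte I/O size the run used; a quick check of that arithmetic (a sanity sketch, not part of the harness):

# Sanity check: 8596.12 IOPS x 4096 B per I/O, converted to MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 8596.12 * 4096 / (1024 * 1024) }'
# prints 33.58 MiB/s, matching the table

A second bdevperf instance (pid 796065) is then started for 15 seconds, this time with the extra -f flag, and the harness sleeps 3 seconds before injecting a fault into the target; that longer run is what the rest of this stretch follows.
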
00:34:32.530 07:03:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 795779 00:34:32.530 07:03:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:32.530 [2024-07-15 07:03:19.850105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.850986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.530 [2024-07-15 07:03:19.850999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.530 [2024-07-15 07:03:19.851013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.531 [2024-07-15 07:03:19.851026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.531 [2024-07-15 07:03:19.851054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 
07:03:19.851803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.851985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.851998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.852013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.852026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.852041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.852054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.852069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.852082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.852097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.852110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.531 [2024-07-15 07:03:19.852125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.531 [2024-07-15 07:03:19.852138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.532 [2024-07-15 07:03:19.852910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.852983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.852998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.853011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.853039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.853054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.853067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.853081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 
07:03:19.853094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.532 [2024-07-15 07:03:19.853109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-07-15 07:03:19.853122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-07-15 07:03:19.853769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.853972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.853986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.533 [2024-07-15 07:03:19.854258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.533 [2024-07-15 07:03:19.854273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17209a0 is same with the state(5) to be set 00:34:32.533 [2024-07-15 07:03:19.854290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:32.534 [2024-07-15 07:03:19.854302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:32.534 [2024-07-15 07:03:19.854314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 00:34:32.534 [2024-07-15 07:03:19.854328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.534 [2024-07-15 07:03:19.854390] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17209a0 was disconnected and freed. reset controller. 
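
Everything from the kill -9 at the top of this stretch down to here is a single event seen from the host side: once the target is gone the TCP connection drops, and the host NVMe driver completes every command still queued on the I/O qpair as ABORTED - SQ DELETION (status 00/08), one print_command/print_completion pair per outstanding request (a mix of READs and WRITEs, consistent with the verify workload at queue depth 128), until the disconnected qpair 0x17209a0 is freed and a controller reset is scheduled. The fault injection itself is just the two harness steps traced at the start; as a stand-alone sketch, where TARGET_PID is an assumed variable holding the nvmf_tgt pid (795779 in this log):

# Sketch of the fault injection: SIGKILL the target mid-run so the host
# driver aborts all in-flight I/O and begins its reset/reconnect path.
kill -9 "$TARGET_PID"
sleep 3    # give the host time to run through the abort storm above
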
00:34:32.534 [2024-07-15 07:03:19.854468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.534 [2024-07-15 07:03:19.854493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.534 [2024-07-15 07:03:19.854515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.534 [2024-07-15 07:03:19.854530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.534 [2024-07-15 07:03:19.854545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.534 [2024-07-15 07:03:19.854559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.534 [2024-07-15 07:03:19.854574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.534 [2024-07-15 07:03:19.854588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.534 [2024-07-15 07:03:19.854601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.858226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.858266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.858969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.859000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.859016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.859258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.859502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.859525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.859542] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.863315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.534 [2024-07-15 07:03:19.872410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.872850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.872889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.872909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.873148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.873392] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.873415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.873430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.877016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.534 [2024-07-15 07:03:19.886312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.886811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.886854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.886871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.887132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.887376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.887399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.887414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.891025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.534 [2024-07-15 07:03:19.900310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.900753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.900784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.900801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.901051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.901294] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.901319] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.901333] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.904912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.534 [2024-07-15 07:03:19.914202] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.914626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.914658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.914676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.914925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.915168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.915192] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.915207] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.918780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.534 [2024-07-15 07:03:19.928081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.928506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.928537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.928555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.928802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.929057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.929082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.929096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.932676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.534 [2024-07-15 07:03:19.941985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.942417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.942448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.942466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.534 [2024-07-15 07:03:19.942704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.534 [2024-07-15 07:03:19.942959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.534 [2024-07-15 07:03:19.942984] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.534 [2024-07-15 07:03:19.942999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.534 [2024-07-15 07:03:19.946573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.534 [2024-07-15 07:03:19.955865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.534 [2024-07-15 07:03:19.956290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.534 [2024-07-15 07:03:19.956321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.534 [2024-07-15 07:03:19.956339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:19.956577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:19.956819] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:19.956843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:19.956858] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:19.960444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.535 [2024-07-15 07:03:19.969744] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:19.970152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:19.970183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:19.970201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:19.970439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:19.970682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:19.970706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:19.970726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:19.974317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.535 [2024-07-15 07:03:19.983611] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:19.984030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:19.984061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:19.984079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:19.984317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:19.984560] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:19.984584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:19.984599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:19.988188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.535 [2024-07-15 07:03:19.997486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:19.997909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:19.997940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:19.997958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:19.998197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:19.998440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:19.998464] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:19.998479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:20.002069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.535 [2024-07-15 07:03:20.011371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:20.011797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:20.011828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:20.011846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:20.012095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:20.012340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:20.012363] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:20.012378] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:20.015967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.535 [2024-07-15 07:03:20.025270] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:20.025676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:20.025716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:20.025735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:20.025995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:20.026240] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:20.026265] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:20.026280] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:20.029857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.535 [2024-07-15 07:03:20.039235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.535 [2024-07-15 07:03:20.039671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.535 [2024-07-15 07:03:20.039704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.535 [2024-07-15 07:03:20.039723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.535 [2024-07-15 07:03:20.039971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.535 [2024-07-15 07:03:20.040216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.535 [2024-07-15 07:03:20.040240] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.535 [2024-07-15 07:03:20.040256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.535 [2024-07-15 07:03:20.043839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.536 [2024-07-15 07:03:20.053141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.053543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.053574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.053593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.053832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.054086] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.054111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.054126] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.057706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.536 [2024-07-15 07:03:20.067023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.067399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.067430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.067447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.067685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.067946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.067970] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.067985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.071564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.536 [2024-07-15 07:03:20.080870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.081431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.081463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.081481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.081719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.081975] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.082000] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.082014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.085590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.536 [2024-07-15 07:03:20.094908] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.095331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.095362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.095379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.095617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.095860] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.095898] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.095914] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.099493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.536 [2024-07-15 07:03:20.108796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.109226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.109257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.109274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.109513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.109756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.109780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.109794] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.113391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.536 [2024-07-15 07:03:20.122695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.123084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.123116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.123134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.123372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.123614] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.123637] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.123652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.536 [2024-07-15 07:03:20.127238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.536 [2024-07-15 07:03:20.136746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.536 [2024-07-15 07:03:20.137157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.536 [2024-07-15 07:03:20.137188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.536 [2024-07-15 07:03:20.137206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.536 [2024-07-15 07:03:20.137444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.536 [2024-07-15 07:03:20.137687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.536 [2024-07-15 07:03:20.137711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.536 [2024-07-15 07:03:20.137726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.794 [2024-07-15 07:03:20.141323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.794 [2024-07-15 07:03:20.150638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.794 [2024-07-15 07:03:20.151023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.794 [2024-07-15 07:03:20.151054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.794 [2024-07-15 07:03:20.151071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.794 [2024-07-15 07:03:20.151309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.794 [2024-07-15 07:03:20.151552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.794 [2024-07-15 07:03:20.151576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.794 [2024-07-15 07:03:20.151591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.794 [2024-07-15 07:03:20.155181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.794 [2024-07-15 07:03:20.164692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.794 [2024-07-15 07:03:20.165098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.165129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.165152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.165391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.165635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.165659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.165673] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.169264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.795 [2024-07-15 07:03:20.178563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.178989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.179020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.179038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.179276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.179519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.179542] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.179558] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.183144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.795 [2024-07-15 07:03:20.192451] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.192869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.192909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.192928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.193166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.193409] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.193433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.193447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.197033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.795 [2024-07-15 07:03:20.206349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.206755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.206787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.206804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.207051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.207295] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.207324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.207340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.210920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.795 [2024-07-15 07:03:20.220212] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.220637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.220668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.220685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.220934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.221178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.221202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.221217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.224794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.795 [2024-07-15 07:03:20.234103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.234502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.234532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.234550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.234788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.235041] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.235065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.235080] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.238658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.795 [2024-07-15 07:03:20.247963] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.248363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.248393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.248411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.248650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.248901] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.248925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.248940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.252521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.795 [2024-07-15 07:03:20.261818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.262237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.262268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.262286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.262524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.262767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.262791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.262805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.266395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.795 [2024-07-15 07:03:20.275694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.276097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.276128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.795 [2024-07-15 07:03:20.276145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.795 [2024-07-15 07:03:20.276384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.795 [2024-07-15 07:03:20.276627] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.795 [2024-07-15 07:03:20.276651] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.795 [2024-07-15 07:03:20.276666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.795 [2024-07-15 07:03:20.280268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.795 [2024-07-15 07:03:20.289569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.795 [2024-07-15 07:03:20.289969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.795 [2024-07-15 07:03:20.290000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.290018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.290256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.290499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.290523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.290538] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.294124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.796 [2024-07-15 07:03:20.303429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.303818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.303849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.303866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.304120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.304363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.304387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.304402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.307984] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.796 [2024-07-15 07:03:20.317268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.317692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.317723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.317740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.317991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.318234] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.318258] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.318272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.321846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.796 [2024-07-15 07:03:20.331133] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.331543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.331575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.331593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.331832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.332084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.332109] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.332124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.335697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.796 [2024-07-15 07:03:20.344985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.345381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.345412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.345429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.345667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.345920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.345944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.345965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.349538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.796 [2024-07-15 07:03:20.358822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.359231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.359263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.359281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.359519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.359763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.359786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.359801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.363385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.796 [2024-07-15 07:03:20.372671] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.373173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.373226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.373244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.373481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.373724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.373748] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.373763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.377346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.796 [2024-07-15 07:03:20.386628] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.387174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.387235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.387253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.387490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.387734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.387757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.387772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.391357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.796 [2024-07-15 07:03:20.400633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.796 [2024-07-15 07:03:20.401194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.796 [2024-07-15 07:03:20.401253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:32.796 [2024-07-15 07:03:20.401270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:32.796 [2024-07-15 07:03:20.401508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:32.796 [2024-07-15 07:03:20.401751] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.796 [2024-07-15 07:03:20.401775] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.796 [2024-07-15 07:03:20.401789] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.796 [2024-07-15 07:03:20.405381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.058 [2024-07-15 07:03:20.414694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.415137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.415168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.415186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.415424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.415666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.415690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.415705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.419296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.058 [2024-07-15 07:03:20.428587] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.429008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.429039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.429058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.429296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.429539] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.429563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.429577] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.433164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.058 [2024-07-15 07:03:20.442483] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.442981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.443012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.443030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.443274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.443517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.443541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.443556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.447139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.058 [2024-07-15 07:03:20.456423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.456861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.456899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.456917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.457156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.457399] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.457423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.457438] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.461025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.058 [2024-07-15 07:03:20.470322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.470725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.470756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.470774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.471022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.471266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.471289] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.471305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.474890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.058 [2024-07-15 07:03:20.484203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.484599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.484630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.484647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.484895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.485139] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.485163] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.485183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.488756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.058 [2024-07-15 07:03:20.498052] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.058 [2024-07-15 07:03:20.498450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.058 [2024-07-15 07:03:20.498480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.058 [2024-07-15 07:03:20.498498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.058 [2024-07-15 07:03:20.498736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.058 [2024-07-15 07:03:20.498988] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.058 [2024-07-15 07:03:20.499012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.058 [2024-07-15 07:03:20.499026] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.058 [2024-07-15 07:03:20.502598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same retry cycle (resetting controller; connect() failed, errno = 111; sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420; Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor; controller reinitialization failed; Resetting controller failed.) repeats 48 more times, roughly every 14 ms, from 07:03:20.512096 through 07:03:21.170568 ...]
00:34:33.585 [2024-07-15 07:03:21.179870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.585 [2024-07-15 07:03:21.180271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.585 [2024-07-15 07:03:21.180302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.585 [2024-07-15 07:03:21.180319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.585 [2024-07-15 07:03:21.180557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.585 [2024-07-15 07:03:21.180800] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.585 [2024-07-15 07:03:21.180824] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.585 [2024-07-15 07:03:21.180839] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.585 [2024-07-15 07:03:21.184434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.585 [2024-07-15 07:03:21.193748] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.585 [2024-07-15 07:03:21.194177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.585 [2024-07-15 07:03:21.194209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.585 [2024-07-15 07:03:21.194233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.585 [2024-07-15 07:03:21.194473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.585 [2024-07-15 07:03:21.194715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.585 [2024-07-15 07:03:21.194739] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.585 [2024-07-15 07:03:21.194754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.846 [2024-07-15 07:03:21.198355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.846 [2024-07-15 07:03:21.207668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.846 [2024-07-15 07:03:21.208110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.846 [2024-07-15 07:03:21.208141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.846 [2024-07-15 07:03:21.208158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.846 [2024-07-15 07:03:21.208397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.846 [2024-07-15 07:03:21.208640] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.846 [2024-07-15 07:03:21.208664] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.846 [2024-07-15 07:03:21.208679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.846 [2024-07-15 07:03:21.212273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.846 [2024-07-15 07:03:21.221577] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.846 [2024-07-15 07:03:21.221987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.846 [2024-07-15 07:03:21.222018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.846 [2024-07-15 07:03:21.222036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.846 [2024-07-15 07:03:21.222275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.222517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.222541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.222555] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.226144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.847 [2024-07-15 07:03:21.235448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.235888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.235919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.235936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.236174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.236416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.236445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.236461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.240046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.847 [2024-07-15 07:03:21.249343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.249744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.249775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.249792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.250042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.250285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.250309] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.250324] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.253906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.847 [2024-07-15 07:03:21.263196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.263613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.263644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.263661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.263910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.264154] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.264178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.264193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.267773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.847 [2024-07-15 07:03:21.277098] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.277495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.277526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.277544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.277782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.278037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.278061] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.278076] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.281657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.847 [2024-07-15 07:03:21.290960] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.291333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.291364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.291381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.291619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.291863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.291899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.291915] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.295493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.847 [2024-07-15 07:03:21.305009] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.305430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.305461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.305478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.305716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.305971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.305995] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.306010] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.309588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.847 [2024-07-15 07:03:21.318892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.319292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.319322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.319340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.319577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.319821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.319845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.319860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.323448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.847 [2024-07-15 07:03:21.332745] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.847 [2024-07-15 07:03:21.333153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.847 [2024-07-15 07:03:21.333184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.847 [2024-07-15 07:03:21.333202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.847 [2024-07-15 07:03:21.333449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.847 [2024-07-15 07:03:21.333692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.847 [2024-07-15 07:03:21.333716] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.847 [2024-07-15 07:03:21.333731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.847 [2024-07-15 07:03:21.337316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.848 [2024-07-15 07:03:21.346606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.347012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.347043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.347061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.347299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.347542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.347565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.347580] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.351186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.848 [2024-07-15 07:03:21.360497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.360903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.360939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.360957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.361195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.361439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.361463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.361478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.365063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.848 [2024-07-15 07:03:21.374364] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.374761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.374792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.374810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.375056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.375300] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.375324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.375344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.378928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.848 [2024-07-15 07:03:21.388214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.388634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.388665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.388683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.388931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.389174] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.389198] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.389213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.392793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.848 [2024-07-15 07:03:21.402092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.402566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.402596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.402614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.402851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.403102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.403126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.403141] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.406714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.848 [2024-07-15 07:03:21.416025] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.416454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.416485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.416502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.416741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.416994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.417018] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.417033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.420610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.848 [2024-07-15 07:03:21.429918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.430333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.430364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.430381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.430619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.430862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.430894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.430910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.434493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:33.848 [2024-07-15 07:03:21.443798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.444233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.444264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.444282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.848 [2024-07-15 07:03:21.444520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.848 [2024-07-15 07:03:21.444763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.848 [2024-07-15 07:03:21.444786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.848 [2024-07-15 07:03:21.444800] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:33.848 [2024-07-15 07:03:21.448395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:33.848 [2024-07-15 07:03:21.457705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:33.848 [2024-07-15 07:03:21.458140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.848 [2024-07-15 07:03:21.458171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:33.848 [2024-07-15 07:03:21.458188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:33.849 [2024-07-15 07:03:21.458426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:33.849 [2024-07-15 07:03:21.458668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:33.849 [2024-07-15 07:03:21.458692] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:33.849 [2024-07-15 07:03:21.458707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.462300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.109 [2024-07-15 07:03:21.471612] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.472019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.472049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.472068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.472306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.472556] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.472579] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.472594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.476187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.109 [2024-07-15 07:03:21.485504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.485902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.485933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.485952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.486190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.486433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.486457] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.486472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.490092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.109 [2024-07-15 07:03:21.499401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.499774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.499805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.499823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.500072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.500316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.500340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.500355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.503942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.109 [2024-07-15 07:03:21.513446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.513869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.513907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.513924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.514163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.514406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.514430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.514445] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.518037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.109 [2024-07-15 07:03:21.527326] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.527725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.527755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.527773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.528021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.528265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.528288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.528303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.531884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.109 [2024-07-15 07:03:21.541172] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.109 [2024-07-15 07:03:21.541600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.109 [2024-07-15 07:03:21.541631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.109 [2024-07-15 07:03:21.541649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.109 [2024-07-15 07:03:21.541898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.109 [2024-07-15 07:03:21.542142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.109 [2024-07-15 07:03:21.542166] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.109 [2024-07-15 07:03:21.542181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.109 [2024-07-15 07:03:21.545758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.109 [2024-07-15 07:03:21.555069] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.555464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.555495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.555513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.555750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.556006] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.556030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.556045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.559623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.110 [2024-07-15 07:03:21.568931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.569350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.569380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.569403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.569642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.569895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.569920] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.569935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.573513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.110 [2024-07-15 07:03:21.582824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.583253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.583284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.583301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.583539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.583782] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.583806] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.583821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.587410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.110 [2024-07-15 07:03:21.596735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.597177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.597208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.597225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.597463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.597706] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.597730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.597744] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.601331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.110 [2024-07-15 07:03:21.610636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.611041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.611072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.611090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.611328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.611576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.611600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.611615] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.615210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.110 [2024-07-15 07:03:21.624547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.624970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.625001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.625019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.625257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.625500] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.625524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.625539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.629127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.110 [2024-07-15 07:03:21.638442] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.638863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.638901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.638919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.639157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.639400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.639424] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.639439] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.643029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.110 [2024-07-15 07:03:21.652332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.652763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.652794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.652811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.653060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.653304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.110 [2024-07-15 07:03:21.653328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.110 [2024-07-15 07:03:21.653342] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.110 [2024-07-15 07:03:21.656930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.110 [2024-07-15 07:03:21.666219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.110 [2024-07-15 07:03:21.666619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.110 [2024-07-15 07:03:21.666650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.110 [2024-07-15 07:03:21.666667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.110 [2024-07-15 07:03:21.666916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.110 [2024-07-15 07:03:21.667160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.111 [2024-07-15 07:03:21.667183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.111 [2024-07-15 07:03:21.667198] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.111 [2024-07-15 07:03:21.670776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.111 [2024-07-15 07:03:21.680076] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.111 [2024-07-15 07:03:21.680508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.111 [2024-07-15 07:03:21.680539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.111 [2024-07-15 07:03:21.680557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.111 [2024-07-15 07:03:21.680795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.111 [2024-07-15 07:03:21.681076] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.111 [2024-07-15 07:03:21.681110] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.111 [2024-07-15 07:03:21.681126] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.111 [2024-07-15 07:03:21.684709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.111 [2024-07-15 07:03:21.694019] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.111 [2024-07-15 07:03:21.694415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.111 [2024-07-15 07:03:21.694446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.111 [2024-07-15 07:03:21.694463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.111 [2024-07-15 07:03:21.694702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.111 [2024-07-15 07:03:21.694956] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.111 [2024-07-15 07:03:21.694980] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.111 [2024-07-15 07:03:21.694996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.111 [2024-07-15 07:03:21.698577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.111 [2024-07-15 07:03:21.707874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.111 [2024-07-15 07:03:21.708301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.111 [2024-07-15 07:03:21.708332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.111 [2024-07-15 07:03:21.708355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.111 [2024-07-15 07:03:21.708594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.111 [2024-07-15 07:03:21.708837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.111 [2024-07-15 07:03:21.708861] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.111 [2024-07-15 07:03:21.708885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.111 [2024-07-15 07:03:21.712464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.111 [2024-07-15 07:03:21.721761] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.722191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.722223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.722241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.722479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.722722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.722745] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.722760] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.726350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.371 [2024-07-15 07:03:21.735641] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.736070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.736101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.736118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.736356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.736599] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.736623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.736638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.740224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.371 [2024-07-15 07:03:21.749542] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.749961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.749992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.750010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.750248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.750491] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.750521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.750536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.754118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.371 [2024-07-15 07:03:21.763409] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.763826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.763857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.763874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.764123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.764366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.764389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.764404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.767987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.371 [2024-07-15 07:03:21.777268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.777643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.777674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.777692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.777940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.778184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.778207] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.778222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.781801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.371 [2024-07-15 07:03:21.791313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.791753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.791783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.791801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.792058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.792301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.792324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.792339] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.795926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.371 [2024-07-15 07:03:21.805217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.371 [2024-07-15 07:03:21.805600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-15 07:03:21.805632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.371 [2024-07-15 07:03:21.805650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.371 [2024-07-15 07:03:21.805905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.371 [2024-07-15 07:03:21.806157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.371 [2024-07-15 07:03:21.806183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.371 [2024-07-15 07:03:21.806198] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.371 [2024-07-15 07:03:21.809776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.372 [2024-07-15 07:03:21.819080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.819452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.819483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.819500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.819738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.819993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.820017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.820032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.823609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.372 [2024-07-15 07:03:21.833118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.833539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.833570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.833588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.833826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.834080] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.834104] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.834119] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.837700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.372 [2024-07-15 07:03:21.847003] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.847422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.847452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.847470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.847714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.847968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.847993] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.848008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.851584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.372 [2024-07-15 07:03:21.860888] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.861309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.861339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.861357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.861595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.861838] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.861862] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.861886] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.865466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.372 [2024-07-15 07:03:21.874764] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.875171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.875201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.875219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.875456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.875700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.875724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.875739] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.879332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.372 [2024-07-15 07:03:21.888631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.889066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.889097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.889114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.889352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.889595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.889619] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.889640] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.893227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.372 [2024-07-15 07:03:21.902666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.903103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.903135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.903153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.903391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.903634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.903657] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.903672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.907259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.372 [2024-07-15 07:03:21.916551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.916971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.917003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.917021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.917259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.917503] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.917527] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.917541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.921131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.372 [2024-07-15 07:03:21.930435] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.372 [2024-07-15 07:03:21.930861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-15 07:03:21.930900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.372 [2024-07-15 07:03:21.930919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.372 [2024-07-15 07:03:21.931157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.372 [2024-07-15 07:03:21.931400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.372 [2024-07-15 07:03:21.931424] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.372 [2024-07-15 07:03:21.931439] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.372 [2024-07-15 07:03:21.935020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.372 [2024-07-15 07:03:21.944326] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.373 [2024-07-15 07:03:21.944725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-15 07:03:21.944762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.373 [2024-07-15 07:03:21.944780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.373 [2024-07-15 07:03:21.945027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.373 [2024-07-15 07:03:21.945270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.373 [2024-07-15 07:03:21.945294] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.373 [2024-07-15 07:03:21.945309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.373 [2024-07-15 07:03:21.948915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.373 [2024-07-15 07:03:21.958215] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.373 [2024-07-15 07:03:21.958640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-15 07:03:21.958671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.373 [2024-07-15 07:03:21.958689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.373 [2024-07-15 07:03:21.958937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.373 [2024-07-15 07:03:21.959180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.373 [2024-07-15 07:03:21.959204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.373 [2024-07-15 07:03:21.959219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.373 [2024-07-15 07:03:21.962793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.373 [2024-07-15 07:03:21.972116] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.373 [2024-07-15 07:03:21.972584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-15 07:03:21.972614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.373 [2024-07-15 07:03:21.972632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.373 [2024-07-15 07:03:21.972870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.373 [2024-07-15 07:03:21.973122] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.373 [2024-07-15 07:03:21.973146] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.373 [2024-07-15 07:03:21.973161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.373 [2024-07-15 07:03:21.976740] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.631 [2024-07-15 07:03:21.986041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.631 [2024-07-15 07:03:21.986469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.631 [2024-07-15 07:03:21.986519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.631 [2024-07-15 07:03:21.986537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.631 [2024-07-15 07:03:21.986775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.631 [2024-07-15 07:03:21.987035] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.631 [2024-07-15 07:03:21.987060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.631 [2024-07-15 07:03:21.987074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.631 [2024-07-15 07:03:21.990650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.631 [2024-07-15 07:03:21.999958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.631 [2024-07-15 07:03:22.000379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.631 [2024-07-15 07:03:22.000409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.631 [2024-07-15 07:03:22.000427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.631 [2024-07-15 07:03:22.000664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.631 [2024-07-15 07:03:22.000917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.631 [2024-07-15 07:03:22.000942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.631 [2024-07-15 07:03:22.000957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.631 [2024-07-15 07:03:22.004531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.631 [2024-07-15 07:03:22.013820] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.631 [2024-07-15 07:03:22.014233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.631 [2024-07-15 07:03:22.014265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.631 [2024-07-15 07:03:22.014283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.631 [2024-07-15 07:03:22.014521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.631 [2024-07-15 07:03:22.014764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.631 [2024-07-15 07:03:22.014788] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.631 [2024-07-15 07:03:22.014803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.631 [2024-07-15 07:03:22.018388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.631 [2024-07-15 07:03:22.027686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.028071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.028103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.028121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.028358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.028601] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.028624] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.028639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.032235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.632 [2024-07-15 07:03:22.041737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.042120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.042151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.042169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.042409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.042651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.042675] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.042690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.046283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.632 [2024-07-15 07:03:22.055587] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.055993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.056024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.056042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.056280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.056523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.056547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.056561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.060150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.632 [2024-07-15 07:03:22.069546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.069948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.069980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.069998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.070237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.070479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.070503] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.070518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.074103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.632 [2024-07-15 07:03:22.083408] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.083831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.083862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.083895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.084136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.084380] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.084403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.084418] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.088006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.632 [2024-07-15 07:03:22.097314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.097718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.097749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.097767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.098016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.098259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.098283] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.098298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.101887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.632 [2024-07-15 07:03:22.111187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.111591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.111622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.111640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.111887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.112131] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.112155] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.112170] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.115748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.632 [2024-07-15 07:03:22.125057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.125540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.125588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.125605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.125843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.126094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.126125] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.126141] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.129721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.632 [2024-07-15 07:03:22.139026] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.139458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.139489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.139507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.139745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.139999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.140024] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.140038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.143618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.632 [2024-07-15 07:03:22.152923] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.153329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.153360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.153377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.153614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.153857] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.632 [2024-07-15 07:03:22.153890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.632 [2024-07-15 07:03:22.153907] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.632 [2024-07-15 07:03:22.157488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.632 [2024-07-15 07:03:22.166788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.632 [2024-07-15 07:03:22.167193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.632 [2024-07-15 07:03:22.167224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.632 [2024-07-15 07:03:22.167242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.632 [2024-07-15 07:03:22.167480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.632 [2024-07-15 07:03:22.167723] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.167746] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.167761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.171351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.633 [2024-07-15 07:03:22.180642] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.633 [2024-07-15 07:03:22.181067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.633 [2024-07-15 07:03:22.181098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.633 [2024-07-15 07:03:22.181116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.633 [2024-07-15 07:03:22.181354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.633 [2024-07-15 07:03:22.181596] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.181620] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.181635] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.185225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.633 [2024-07-15 07:03:22.194526] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.633 [2024-07-15 07:03:22.194927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.633 [2024-07-15 07:03:22.194959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.633 [2024-07-15 07:03:22.194977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.633 [2024-07-15 07:03:22.195215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.633 [2024-07-15 07:03:22.195458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.195482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.195497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.199082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.633 [2024-07-15 07:03:22.208378] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.633 [2024-07-15 07:03:22.208813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.633 [2024-07-15 07:03:22.208844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.633 [2024-07-15 07:03:22.208862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.633 [2024-07-15 07:03:22.209109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.633 [2024-07-15 07:03:22.209352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.209376] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.209391] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.212979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.633 [2024-07-15 07:03:22.222277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.633 [2024-07-15 07:03:22.222706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.633 [2024-07-15 07:03:22.222738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.633 [2024-07-15 07:03:22.222760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.633 [2024-07-15 07:03:22.223014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.633 [2024-07-15 07:03:22.223258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.223281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.223296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.226882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.633 [2024-07-15 07:03:22.236177] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.633 [2024-07-15 07:03:22.236581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.633 [2024-07-15 07:03:22.236612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.633 [2024-07-15 07:03:22.236629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.633 [2024-07-15 07:03:22.236867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.633 [2024-07-15 07:03:22.237121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.633 [2024-07-15 07:03:22.237145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.633 [2024-07-15 07:03:22.237160] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.633 [2024-07-15 07:03:22.240738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.891 [2024-07-15 07:03:22.250045] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.891 [2024-07-15 07:03:22.250475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.891 [2024-07-15 07:03:22.250505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.891 [2024-07-15 07:03:22.250523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.891 [2024-07-15 07:03:22.250761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.891 [2024-07-15 07:03:22.251016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.891 [2024-07-15 07:03:22.251040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.891 [2024-07-15 07:03:22.251056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.891 [2024-07-15 07:03:22.254634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.891 [2024-07-15 07:03:22.263935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.891 [2024-07-15 07:03:22.264355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.891 [2024-07-15 07:03:22.264386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.891 [2024-07-15 07:03:22.264404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.891 [2024-07-15 07:03:22.264641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.891 [2024-07-15 07:03:22.264896] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.891 [2024-07-15 07:03:22.264925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.891 [2024-07-15 07:03:22.264941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.891 [2024-07-15 07:03:22.268517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.891 [2024-07-15 07:03:22.277803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.891 [2024-07-15 07:03:22.278214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.891 [2024-07-15 07:03:22.278245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.891 [2024-07-15 07:03:22.278263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.891 [2024-07-15 07:03:22.278501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.891 [2024-07-15 07:03:22.278744] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.891 [2024-07-15 07:03:22.278768] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.891 [2024-07-15 07:03:22.278783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.891 [2024-07-15 07:03:22.282373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.891 [2024-07-15 07:03:22.291669] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.891 [2024-07-15 07:03:22.292106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.891 [2024-07-15 07:03:22.292137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.891 [2024-07-15 07:03:22.292155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.891 [2024-07-15 07:03:22.292393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.891 [2024-07-15 07:03:22.292636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.891 [2024-07-15 07:03:22.292660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.891 [2024-07-15 07:03:22.292674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.891 [2024-07-15 07:03:22.296264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.891 [2024-07-15 07:03:22.305557] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.891 [2024-07-15 07:03:22.305986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.891 [2024-07-15 07:03:22.306017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.891 [2024-07-15 07:03:22.306035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.891 [2024-07-15 07:03:22.306273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.891 [2024-07-15 07:03:22.306516] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.891 [2024-07-15 07:03:22.306540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.891 [2024-07-15 07:03:22.306555] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.892 [2024-07-15 07:03:22.310135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:34.892 [2024-07-15 07:03:22.319418] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.892 [2024-07-15 07:03:22.319848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.892 [2024-07-15 07:03:22.319886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.892 [2024-07-15 07:03:22.319905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.892 [2024-07-15 07:03:22.320143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.892 [2024-07-15 07:03:22.320386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.892 [2024-07-15 07:03:22.320410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.892 [2024-07-15 07:03:22.320424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.892 [2024-07-15 07:03:22.324008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.892 [2024-07-15 07:03:22.333295] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.892 [2024-07-15 07:03:22.333686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.892 [2024-07-15 07:03:22.333716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:34.892 [2024-07-15 07:03:22.333734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:34.892 [2024-07-15 07:03:22.333983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:34.892 [2024-07-15 07:03:22.334226] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.892 [2024-07-15 07:03:22.334249] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.892 [2024-07-15 07:03:22.334264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.892 [2024-07-15 07:03:22.337837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 36 further reconnect cycles for nqn.2016-06.io.spdk:cnode1, 07:03:22.347340 through 07:03:22.838658 (wall clock 00:34:34.892-00:34:35.413), identical to the two above except for microsecond timestamps, about every 14 ms: resetting controller -> connect() failed, errno = 111 (tqpair=0x14f01e0, 10.0.0.2:4420) -> Failed to flush tqpair (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state -> Resetting controller failed. ...]
00:34:35.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 795779 Killed "${NVMF_APP[@]}" "$@"
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=796733
00:34:35.413 [2024-07-15 07:03:22.847966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 796733
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 796733 ']'
00:34:35.413 [2024-07-15 07:03:22.848408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.413 [2024-07-15 07:03:22.848456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420
00:34:35.413 [2024-07-15 07:03:22.848474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:35.413 [2024-07-15 07:03:22.848713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:35.413 [2024-07-15 07:03:22.848966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:35.413 [2024-07-15 07:03:22.848991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:35.413 07:03:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:35.413 [2024-07-15 07:03:22.849006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:35.413 [2024-07-15 07:03:22.852588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
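The trace above is bdevperf.sh's tgt_init: the old target (pid 795779) has been killed, a fresh nvmf_tgt (pid 796733) is launched in the cvl_0_0_ns_spdk namespace, and the script blocks in waitforlisten until the new process answers on /var/tmp/spdk.sock. A rough sketch of such a wait loop, assuming only the defaults visible in the log (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the real helper lives in autotest_common.sh and differs in detail:

```python
# Rough sketch of a waitforlisten-style poll; not the test suite's helper,
# just the same idea under the assumptions stated above.
import os
import socket
import time

def wait_for_rpc_socket(pid: int, path: str = "/var/tmp/spdk.sock",
                        max_retries: int = 100) -> bool:
    """Poll until `pid` is alive and the SPDK RPC UNIX socket accepts."""
    for _ in range(max_retries):
        try:
            os.kill(pid, 0)          # signal 0: raises if the target died
        except ProcessLookupError:
            return False
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True      # target is up and listening
                except OSError:
                    pass             # socket exists but not ready yet
        time.sleep(0.5)
    return False

# e.g. wait_for_rpc_socket(796733) after launching nvmf_tgt
```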
[... 07:03:22.861897 and 07:03:22.875773: two more of the same reconnect cycles, each ending in "Resetting controller failed." ...]
[... 07:03:22.889630: another identical reconnect cycle, ending in "Resetting controller failed." ...]
00:34:35.413 [2024-07-15 07:03:22.896060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:35.413 [2024-07-15 07:03:22.896128] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... 07:03:22.903567: another identical reconnect cycle, ending in "Resetting controller failed." ...]
[... 07:03:22.917616 and 07:03:22.931468: two more identical reconnect cycles, each ending in "Resetting controller failed." ...]
00:34:35.413 EAL: No free 2048 kB hugepages reported on node 1
[... 07:03:22.945354 and 07:03:22.959232: two more identical reconnect cycles, each ending in "Resetting controller failed." ...]
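The EAL line above reports that no free 2048 kB hugepages were found on NUMA node 1; initialization continues, so node 0 presumably supplied them. One way to inspect the per-node counts, assuming a Linux host with the standard hugetlb sysfs layout (paths illustrative):

```python
# Illustrative check of per-NUMA-node 2048 kB hugepage counts via sysfs;
# mirrors the condition behind EAL's "No free 2048 kB hugepages" notice.
from pathlib import Path

def hugepage_counts(node: int, size_kb: int = 2048) -> dict:
    base = Path(f"/sys/devices/system/node/node{node}/hugepages/"
                f"hugepages-{size_kb}kB")
    return {name: int((base / name).read_text())
            for name in ("nr_hugepages", "free_hugepages")}

if __name__ == "__main__":
    for node in (0, 1):
        try:
            print(node, hugepage_counts(node))
        except FileNotFoundError:
            print(node, "no such node / hugepage size on this host")
```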
00:34:35.414 [2024-07-15 07:03:22.972545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:35.414 [2024-07-15 07:03:22.973115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.414 [2024-07-15 07:03:22.973524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.414 [2024-07-15 07:03:22.973555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.414 [2024-07-15 07:03:22.973573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.414 [2024-07-15 07:03:22.973811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.414 [2024-07-15 07:03:22.974064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.414 [2024-07-15 07:03:22.974089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.414 [2024-07-15 07:03:22.974104] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.414 [2024-07-15 07:03:22.977687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.414 [2024-07-15 07:03:22.987031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.414 [2024-07-15 07:03:22.987660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.414 [2024-07-15 07:03:22.987703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.414 [2024-07-15 07:03:22.987724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.414 [2024-07-15 07:03:22.987983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.414 [2024-07-15 07:03:22.988231] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.414 [2024-07-15 07:03:22.988256] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.414 [2024-07-15 07:03:22.988273] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.414 [2024-07-15 07:03:22.991867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.414 [2024-07-15 07:03:23.000974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.414 [2024-07-15 07:03:23.001396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.414 [2024-07-15 07:03:23.001428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.414 [2024-07-15 07:03:23.001447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.414 [2024-07-15 07:03:23.001685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.414 [2024-07-15 07:03:23.001938] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.414 [2024-07-15 07:03:23.001963] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.414 [2024-07-15 07:03:23.001978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.414 [2024-07-15 07:03:23.005555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.414 [2024-07-15 07:03:23.014839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.414 [2024-07-15 07:03:23.015264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.414 [2024-07-15 07:03:23.015295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.414 [2024-07-15 07:03:23.015314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.414 [2024-07-15 07:03:23.015552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.414 [2024-07-15 07:03:23.015796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.414 [2024-07-15 07:03:23.015820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.414 [2024-07-15 07:03:23.015835] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.414 [2024-07-15 07:03:23.019417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.674 [2024-07-15 07:03:23.028727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.674 [2024-07-15 07:03:23.029252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.674 [2024-07-15 07:03:23.029294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.674 [2024-07-15 07:03:23.029316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.674 [2024-07-15 07:03:23.029563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.674 [2024-07-15 07:03:23.029810] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.674 [2024-07-15 07:03:23.029835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.674 [2024-07-15 07:03:23.029852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.033441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.042740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.043233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.043269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.043300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.043542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.043788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.043813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.043830] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.047421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.675 [2024-07-15 07:03:23.056718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.057169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.057201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.057218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.057456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.057699] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.057724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.057739] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.061328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.067226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.675 [2024-07-15 07:03:23.067263] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.675 [2024-07-15 07:03:23.067278] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.675 [2024-07-15 07:03:23.067291] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.675 [2024-07-15 07:03:23.067303] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.675 [2024-07-15 07:03:23.067360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.675 [2024-07-15 07:03:23.067419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.675 [2024-07-15 07:03:23.067422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.675 [2024-07-15 07:03:23.070630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.071103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.071136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.071155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.071398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.071643] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.071668] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.071685] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:35.675 [2024-07-15 07:03:23.075284] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.084622] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.085229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.085270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.085292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.085541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.085789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.085814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.085832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.089504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.098624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.099240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.099284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.099305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.099553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.099801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.099826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.099845] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.103439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
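The app_setup_trace notices a few blocks up give the two supported ways to harvest this run's tracepoints; spelled out as commands, using only what the log itself prints (shm id 0, group mask 0xFFFF):

  # live snapshot while the nvmf app is still running
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory ring for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0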
00:34:35.675 [2024-07-15 07:03:23.112537] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.113173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.113215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.113237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.113485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.113733] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.113758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.113775] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.117364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.126461] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.126962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.127000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.127031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.127277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.127523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.127548] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.127565] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.131149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.675 [2024-07-15 07:03:23.140467] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.141046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.141091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.141112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.141359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.141606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.141631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.141648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.145236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.675 [2024-07-15 07:03:23.154539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.154981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.155016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.155035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.155278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.155523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.155547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.155563] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.159147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.675 [2024-07-15 07:03:23.168422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.675 [2024-07-15 07:03:23.168812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.675 [2024-07-15 07:03:23.168840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.675 [2024-07-15 07:03:23.168856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.675 [2024-07-15 07:03:23.169078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.675 [2024-07-15 07:03:23.169305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.675 [2024-07-15 07:03:23.169327] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.675 [2024-07-15 07:03:23.169340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.675 [2024-07-15 07:03:23.172606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.676 [2024-07-15 07:03:23.182041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.182439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.182467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.182483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.182698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.182926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.182948] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.182962] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.186231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.676 [2024-07-15 07:03:23.195579] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.195957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.195985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.196001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.196215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.196444] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.196465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.196479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.199698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.209222] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.209815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.209843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.209864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.209907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.676 [2024-07-15 07:03:23.210087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.210320] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.210341] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.210354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.213623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.222724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.223153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.223182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.223198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.223427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.223648] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.223669] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.223681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.226786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.676 [2024-07-15 07:03:23.236164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.236567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.236595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.236611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.236825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.237091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.237114] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.237127] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.240348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.676 [2024-07-15 07:03:23.249714] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.250329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.250374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.250406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.250644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.250861] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.250904] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.250923] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.254098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.676 Malloc0 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.263405] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.263830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.263859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.263882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.264100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.264330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.264351] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.264364] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.267621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.676 [2024-07-15 07:03:23.276909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.676 [2024-07-15 07:03:23.277292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.676 [2024-07-15 07:03:23.277319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f01e0 with addr=10.0.0.2, port=4420 00:34:35.676 [2024-07-15 07:03:23.277335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f01e0 is same with the state(5) to be set 00:34:35.676 [2024-07-15 07:03:23.277407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.676 [2024-07-15 07:03:23.277564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f01e0 (9): Bad file descriptor 00:34:35.676 [2024-07-15 07:03:23.277782] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:35.676 [2024-07-15 07:03:23.277803] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:35.676 [2024-07-15 07:03:23.277816] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:35.676 [2024-07-15 07:03:23.281062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.676 07:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 796065 00:34:35.936 [2024-07-15 07:03:23.290513] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:35.936 [2024-07-15 07:03:23.457001] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
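The rpc_cmd calls traced from bdevperf.sh@17 through @21 are the complete target bring-up. Against SPDK's rpc.py they would read as below (same methods and flags as the trace; the default /var/tmp/spdk.sock RPC socket is assumed):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last call does the target log "Listening on 10.0.0.2 port 4420", at which point the reconnect loop finally reports "Resetting controller successful".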
00:34:45.906
00:34:45.906 Latency(us)
00:34:45.906 Device Information                  : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average       min        max
00:34:45.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:45.906 Verification LBA range: start 0x0 length 0x4000
00:34:45.906 Nvme1n1                             :      15.05   6603.72     25.80   8843.04     0.00    8240.33    861.68   45049.93
00:34:45.906 ===================================================================================================================
00:34:45.906 Total                               :             6603.72     25.80   8843.04     0.00    8240.33    861.68   45049.93
00:34:45.906 07:03:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:45.906 07:03:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:45.907 rmmod nvme_tcp
00:34:45.907 rmmod nvme_fabrics
00:34:45.907 rmmod nvme_keyring
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 796733 ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 796733 ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 796733'
00:34:45.907 killing process with pid 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 796733
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:45.907
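As a sanity check, the table's columns agree with each other: 6603.72 IOPS at the 4096-byte IO size works out to 6603.72 * 4096 / 1048576 ≈ 25.80 MiB/s, matching the MiB/s column. One-liner to reproduce:

  echo '6603.72 * 4096 / 1048576' | bc -l   # ~25.79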
07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.907 07:03:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.812 07:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.812 00:34:47.812 real 0m22.561s 00:34:47.812 user 1m0.050s 00:34:47.812 sys 0m4.398s 00:34:47.812 07:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:47.812 07:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.812 ************************************ 00:34:47.812 END TEST nvmf_bdevperf 00:34:47.812 ************************************ 00:34:47.812 07:03:35 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:47.812 07:03:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:47.812 07:03:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:47.812 07:03:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.812 ************************************ 00:34:47.812 START TEST nvmf_target_disconnect 00:34:47.812 ************************************ 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:47.812 * Looking for test storage... 
00:34:47.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.812 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:47.813 07:03:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:49.726 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:49.726 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.726 07:03:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:49.726 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.726 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:49.727 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:49.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:49.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:34:49.727 00:34:49.727 --- 10.0.0.2 ping statistics --- 00:34:49.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.727 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:49.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:49.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:34:49.727 00:34:49.727 --- 10.0.0.1 ping statistics --- 00:34:49.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.727 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:49.727 ************************************ 00:34:49.727 START TEST nvmf_target_disconnect_tc1 00:34:49.727 ************************************ 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:49.727 
07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:49.727 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:49.727 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.985 [2024-07-15 07:03:37.347803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.985 [2024-07-15 07:03:37.347890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb34740 with addr=10.0.0.2, port=4420 00:34:49.985 [2024-07-15 07:03:37.347931] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:49.985 [2024-07-15 07:03:37.347953] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:49.985 [2024-07-15 07:03:37.347968] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:49.985 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:49.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:49.985 Initializing NVMe Controllers 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.985 00:34:49.985 real 0m0.097s 00:34:49.985 user 0m0.039s 00:34:49.985 sys 0m0.057s 
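[Editor's note] For readers skimming the trace: tc1 deliberately points the reconnect example at 10.0.0.2:4420 before any target is listening, so spdk_nvme_probe() fails with errno 111 (ECONNREFUSED), the example exits non-zero (es=1), and the harness's NOT wrapper in autotest_common.sh turns that expected failure into a pass via (( !es == 0 )). A minimal sketch of that pattern, using an illustrative helper name rather than the harness's own code:

    # expect_failure is a hypothetical stand-in for the harness's NOT() helper
    expect_failure() {
        local es=0
        "$@" || es=$?        # run the command, capturing its exit status without aborting under set -e
        (( es != 0 ))        # succeed only if the command failed
    }
    # Nothing listens on 10.0.0.2:4420 yet, so this probe must fail for the test to pass:
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'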
00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:49.985 ************************************ 00:34:49.985 END TEST nvmf_target_disconnect_tc1 00:34:49.985 ************************************ 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:49.985 ************************************ 00:34:49.985 START TEST nvmf_target_disconnect_tc2 00:34:49.985 ************************************ 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:49.985 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=799879 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 799879 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 799879 ']' 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:49.986 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.986 [2024-07-15 07:03:37.459465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
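[Editor's note] tc2 then brings the target up for real: nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xF0 (which is why the reactors below start on cores 4-7), and waitforlisten blocks until the RPC socket answers, after which the rpc_cmd calls in the trace provision a Malloc0 namespace behind nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A rough equivalent of that bring-up, with a plain polling loop standing in for the harness's waitforlisten helper (binary paths and all arguments are taken from the trace):

    # Start the target in its namespace (same binary and flags as the trace)
    ip netns exec cvl_0_0_ns_spdk \
        ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll the RPC socket until the app is ready (the harness uses waitforlisten instead)
    until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    # Provision: RAM-backed bdev, TCP transport, subsystem, namespace, listener
    ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420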
00:34:49.986 [2024-07-15 07:03:37.459551] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.986 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.986 [2024-07-15 07:03:37.531370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:50.244 [2024-07-15 07:03:37.616976] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.244 [2024-07-15 07:03:37.617029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.244 [2024-07-15 07:03:37.617058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.244 [2024-07-15 07:03:37.617069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.244 [2024-07-15 07:03:37.617079] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.244 [2024-07-15 07:03:37.617137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:50.244 [2024-07-15 07:03:37.617257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:50.244 [2024-07-15 07:03:37.617321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:50.244 [2024-07-15 07:03:37.617323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 Malloc0 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 [2024-07-15 07:03:37.778281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 [2024-07-15 07:03:37.806520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=799905 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:50.244 07:03:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:50.503 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.466 07:03:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 799879 00:34:52.466 07:03:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:52.466 Read completed with error (sct=0, sc=8) 00:34:52.466 starting I/O failed 00:34:52.466 Read completed with error (sct=0, sc=8) 
00:34:52.466 starting I/O failed
[... after the kill -9 above, every outstanding Read/Write I/O on each of the reconnect example's I/O queue pairs completes with error (sct=0, sc=8), each followed by "starting I/O failed"; the run queued 32 I/Os per queue pair (-q 32), and the repeated pairs of lines are elided here. Each queue pair drains the same way and ends with a CQ transport error; only those three lines are kept below ...]
00:34:52.467 [2024-07-15 07:03:39.830331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:52.467 [2024-07-15 07:03:39.830652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:52.467 [2024-07-15 07:03:39.830979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:52.467 [2024-07-15 07:03:39.831152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.467 [2024-07-15 07:03:39.831191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420
00:34:52.467 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/"qpair failed" triplet repeats for each reconnect attempt through 07:03:39.831 ...]
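[Editor's note] The flood above is the interesting part of the test: kill -9 took down the target (pid 799879) while the reconnect example had I/O in flight, so every outstanding request completes in error with sct=0, sc=8 (in the NVMe generic status set, sc=8 is "Command Aborted due to SQ Deletion"), and each I/O queue pair is then torn down with CQ transport error -6 (No such device or address). When triaging a run like this, a few greps give the picture quickly (build.log is a hypothetical saved copy of this console output, not a file the harness produces):

    grep -c 'completed with error (sct=0, sc=8)' build.log   # aborted in-flight I/Os
    grep -c 'CQ transport error -6' build.log                # I/O queue pairs torn down
    grep -c 'connect() failed, errno = 111' build.log        # refused reconnect attempts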
00:34:52.467 [2024-07-15 07:03:39.832024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.467 [2024-07-15 07:03:39.832049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420
00:34:52.467 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 07:03:39.832 through 07:03:39.850, alternating between tqpair=0x7ff3a0000b90 and tqpair=0x21b6840, always against addr=10.0.0.2, port=4420; the repeated entries are elided here ...]
00:34:52.470 [2024-07-15 07:03:39.850936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.850962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.851120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.851147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.851316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.851341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.851484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.851510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.851688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.851716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.851852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.851891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.852015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.852208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.852373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.852543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 
00:34:52.470 [2024-07-15 07:03:39.852712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.852887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.852913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.853942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.853967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.854129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.854153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 
00:34:52.470 [2024-07-15 07:03:39.854269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.470 [2024-07-15 07:03:39.854294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.470 qpair failed and we were unable to recover it. 00:34:52.470 [2024-07-15 07:03:39.854405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.854430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.854604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.854632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.854805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.854830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.854990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.855188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.855390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.855558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.855701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.855871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.855910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 
00:34:52.471 [2024-07-15 07:03:39.856056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.856232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.856418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.856617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.856783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.856965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.856991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.857137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.857162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.857332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.857362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.857494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.857536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.857686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.857713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 
00:34:52.471 [2024-07-15 07:03:39.857886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.857912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.858961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.858986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.859166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.859191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.859364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.859389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 
00:34:52.471 [2024-07-15 07:03:39.859532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.859556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.471 qpair failed and we were unable to recover it. 00:34:52.471 [2024-07-15 07:03:39.859682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.471 [2024-07-15 07:03:39.859707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.859874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.859906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.860938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.860965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.861099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.861142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 
00:34:52.472 [2024-07-15 07:03:39.861321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.861348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.861626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.861674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.861781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.861807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.861948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.861974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.862174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.862339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.862507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.862671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.862802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.862959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 
00:34:52.472 [2024-07-15 07:03:39.863155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.863355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.863520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.863665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.863810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.863836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.864012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.864058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.864255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.864296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.864464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.864510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.864657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.864682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.864795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.864820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 
00:34:52.472 [2024-07-15 07:03:39.864993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.865036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.865274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.865302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.865540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.865597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.865765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.865789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.865944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.865970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.866086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.866110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.866283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.866307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.866507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.866554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.866725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.866750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.866897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.866923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 
00:34:52.472 [2024-07-15 07:03:39.867078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.867121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.867318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.472 [2024-07-15 07:03:39.867361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.472 qpair failed and we were unable to recover it. 00:34:52.472 [2024-07-15 07:03:39.867542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.867567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.867678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.867705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.867847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.867873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.868053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.868193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.868407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.868582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.868774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 
00:34:52.473 [2024-07-15 07:03:39.868956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.868982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.869091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.869117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.869293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.869318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.869514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.869558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.869714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.869739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.869861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.869896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.870065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.870242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.870416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.870555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 
00:34:52.473 [2024-07-15 07:03:39.870724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.870866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.870901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.871069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.871250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.871481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.871651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.871818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.871991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.872193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.872397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 
00:34:52.473 [2024-07-15 07:03:39.872552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.872746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.872941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.872985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.873155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.873307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.873500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.873669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.873813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.873964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.874007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.874164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.874207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 
00:34:52.473 [2024-07-15 07:03:39.874352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.874394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.874563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.473 [2024-07-15 07:03:39.874588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.473 qpair failed and we were unable to recover it. 00:34:52.473 [2024-07-15 07:03:39.874727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.874752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.874896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.874921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.875083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.875127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.875297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.875339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.875482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.875507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.875672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.875696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.875839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.875864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 00:34:52.474 [2024-07-15 07:03:39.876021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.474 [2024-07-15 07:03:39.876064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:52.474 qpair failed and we were unable to recover it. 
[... the same three-line failure record repeats for roughly 200 further connect attempts (timestamps 07:03:39.876258 through 07:03:39.913789), alternating between tqpair=0x7ff3a0000b90 and tqpair=0x21b6840, all against addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:34:52.479 [2024-07-15 07:03:39.913916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.913942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.914055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.914080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.914273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.914300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.914477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.914505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.914683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.914711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.914842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.914870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.915043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.915229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.915413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.915593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 
00:34:52.479 [2024-07-15 07:03:39.915776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.915959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.915985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.916101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.916126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.916264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.916288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.916474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.916502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.916719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.916746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.916867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.916902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.917036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.917200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.917382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 
00:34:52.479 [2024-07-15 07:03:39.917577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.917759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.917897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.479 [2024-07-15 07:03:39.917922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.479 qpair failed and we were unable to recover it. 00:34:52.479 [2024-07-15 07:03:39.918030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.918198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.918372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.918589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.918775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.918970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.918995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.919114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 
00:34:52.480 [2024-07-15 07:03:39.919280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.919446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.919675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.919818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.919973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.919998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.920144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.920189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.920348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.920373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.920486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.920511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.920679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.920706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.920897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.920922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 
00:34:52.480 [2024-07-15 07:03:39.921070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.921207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.921377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.921543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.921733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.921917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.921943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.922082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.922282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.922452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.922626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 
00:34:52.480 [2024-07-15 07:03:39.922764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.922916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.922942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.923063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.923104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.923286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.923314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.923470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.923494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.923638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.923681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.923843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.923868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.924041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.924214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.924382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 
00:34:52.480 [2024-07-15 07:03:39.924552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.924692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.924928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.480 [2024-07-15 07:03:39.924954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.480 qpair failed and we were unable to recover it. 00:34:52.480 [2024-07-15 07:03:39.925124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.925277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.925436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.925631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.925768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.925923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.925952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.926110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.926135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 
00:34:52.481 [2024-07-15 07:03:39.926279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.926323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.926469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.926496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.926678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.926703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.926865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.926907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.927088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.927115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.927255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.927281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.927453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.927496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.927644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.927672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 00:34:52.481 [2024-07-15 07:03:39.927832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.481 [2024-07-15 07:03:39.927857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.481 qpair failed and we were unable to recover it. 
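For reference: errno 111 on Linux is ECONNREFUSED, i.e. nothing on the target was accepting connections at 10.0.0.2:4420 (the NVMe/TCP default port). A minimal, self-contained C sketch, not part of the test and with the address and port purely illustrative, that reproduces the same errno when no listener is present:

    /* Illustrative only: connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED), the error posix_sock_create() logs above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }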
00:34:52.481 Read completed with error (sct=0, sc=8)
00:34:52.481 starting I/O failed
00:34:52.481 [... 32 outstanding I/O completions in total (23 reads, 9 writes) fail with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:34:52.481 [2024-07-15 07:03:39.928185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:52.481 [2024-07-15 07:03:39.928369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.481 [2024-07-15 07:03:39.928412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:52.481 qpair failed and we were unable to recover it.
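The (sct=0, sc=8) pairs above decode, per the NVMe specification's generic command status codes, to "Command Aborted due to SQ Deletion", consistent with the submission queue being torn down while I/O was outstanding, just before the CQ transport error -6 (ENXIO) on qpair id 1. A hedged sketch of how such completions surface in an SPDK I/O completion callback (standard spdk_nvme_cmd_cb signature; the callback name and logging are illustrative, not the autotest's actual code):

    /* Assumes SPDK headers are available; sketch only. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0, sc=8 is generic status "Command Aborted due to
             * SQ Deletion": the qpair went away under the outstanding I/O. */
            fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
            return;
        }
        /* success path would release buffers, update counters, etc. */
    }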
00:34:52.481 [2024-07-15 07:03:39.928567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.481 [2024-07-15 07:03:39.928595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:52.481 qpair failed and we were unable to recover it.
00:34:52.481 [... the same failure repeats three more times on tqpair=0x7ff398000b90 (07:03:39.928733 through 07:03:39.929167); reconnect attempts then resume on tqpair=0x21b6840 and keep failing from 07:03:39.929354 through 07:03:39.943077, all with addr=10.0.0.2, port=4420, errno = 111, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:34:52.483 [2024-07-15 07:03:39.943273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.943301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.943460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.943490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.943671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.943699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.943847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.943874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.944934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.944960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 
00:34:52.483 [2024-07-15 07:03:39.945079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.945103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.945240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.945265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.945380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.945405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.945510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.945535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.945678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.483 [2024-07-15 07:03:39.945719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.483 qpair failed and we were unable to recover it. 00:34:52.483 [2024-07-15 07:03:39.945889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.945917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.946052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.946262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.946446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.946604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 
00:34:52.484 [2024-07-15 07:03:39.946776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.946917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.946942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.947936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.947964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.948123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.948264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 
00:34:52.484 [2024-07-15 07:03:39.948424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.948616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.948774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.948936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.948966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.949156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.949181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.949366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.949393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.949548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.949575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.949746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.949771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.949913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.949955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.950081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 
00:34:52.484 [2024-07-15 07:03:39.950263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.950433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.950602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.950770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.950916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.950942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.951079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.951104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.951250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.951275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.951428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.951455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.951643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.951667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.484 [2024-07-15 07:03:39.951836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.951861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 
00:34:52.484 [2024-07-15 07:03:39.952010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.484 [2024-07-15 07:03:39.952038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.484 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.952155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.952182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.952334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.952359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.952474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.952499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.952694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.952722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.952893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.952937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.953061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.953250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.953437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.953598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 
00:34:52.485 [2024-07-15 07:03:39.953763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.953944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.953969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.954106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.954147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.954290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.954318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.954505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.954530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.954687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.954714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.954872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.954905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.955069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.955094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.955254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.955282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.955473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.955501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 
00:34:52.485 [2024-07-15 07:03:39.955645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.955670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.955823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.955851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.956836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.956996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.957163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 
00:34:52.485 [2024-07-15 07:03:39.957330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.957520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.957734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.957901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.957929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.958949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.958991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 
00:34:52.485 [2024-07-15 07:03:39.959123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.959151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.485 [2024-07-15 07:03:39.959303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.485 [2024-07-15 07:03:39.959328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.485 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.959471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.959513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.959644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.959672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.959889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.959930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.960070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.960095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.960228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.960262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.960452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.960477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.960663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.960690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.960817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.960845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 
00:34:52.486 [2024-07-15 07:03:39.960997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.961802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.961990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.962154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.962326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.962487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 
00:34:52.486 [2024-07-15 07:03:39.962629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.962796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.962972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.962997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.963139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.963167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.963322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.963351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.963489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.963514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.963622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.963647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.963810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.963837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.964020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.964213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 
00:34:52.486 [2024-07-15 07:03:39.964405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.964598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.964785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.964936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.964968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.965135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.965159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.965273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.965298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.965480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.965505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.965679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.965704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.965848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.965873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.966023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.966048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 
00:34:52.486 [2024-07-15 07:03:39.966162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.966186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.966328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.966369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.486 [2024-07-15 07:03:39.966537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.486 [2024-07-15 07:03:39.966565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.486 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.966752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.966777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.966938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.966967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.967097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.967126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.967274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.967299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.967449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.967490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.967645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.967674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 00:34:52.487 [2024-07-15 07:03:39.967855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.487 [2024-07-15 07:03:39.967890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.487 qpair failed and we were unable to recover it. 
[... ~200 further near-identical records, timestamps 2024-07-15 07:03:39.968 through 07:03:40.004: each is the same posix_sock_create connect() failure with errno = 111 against addr=10.0.0.2, port=4420 on tqpair=0x21b6840, followed by "qpair failed and we were unable to recover it." ...]
00:34:52.492 [2024-07-15 07:03:40.004170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.492 [2024-07-15 07:03:40.004195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.492 qpair failed and we were unable to recover it.
00:34:52.492 [2024-07-15 07:03:40.004330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.004362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.004487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.004514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.004689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.004716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.004887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.004912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.005956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.005982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 
00:34:52.492 [2024-07-15 07:03:40.006101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.006142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.006333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.006358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.006515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.006543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.006668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.006695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.006855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.006892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.007013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.007175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.007339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.007532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.007723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 
00:34:52.492 [2024-07-15 07:03:40.007892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.007935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.008063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.008091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.008278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.008306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.008470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.008495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.008658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.008685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.008846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.008874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.009001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.009028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.009191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.009216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.492 qpair failed and we were unable to recover it. 00:34:52.492 [2024-07-15 07:03:40.009365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.492 [2024-07-15 07:03:40.009390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.009506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.009532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 
00:34:52.493 [2024-07-15 07:03:40.009667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.009695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.009867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.009898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.010945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.010971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.011106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.011244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 
00:34:52.493 [2024-07-15 07:03:40.011417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.011617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.011767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.011955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.011980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.012098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.012123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.012272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.012296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.012463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.012491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.012626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.012651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.012778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.012802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.013009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 
00:34:52.493 [2024-07-15 07:03:40.013171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.013333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.013498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.013702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.013883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.013912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.014043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.014216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.014369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.014566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.014751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 
00:34:52.493 [2024-07-15 07:03:40.014890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.014932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.015079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.015107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.015270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.015298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.015427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.015452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.015600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.015625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.015824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.015852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.016002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.016028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.016198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.016223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.016388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.016421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.493 qpair failed and we were unable to recover it. 00:34:52.493 [2024-07-15 07:03:40.016604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.493 [2024-07-15 07:03:40.016632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 
00:34:52.494 [2024-07-15 07:03:40.016753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.016781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.016936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.016965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.017157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.017314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.017500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.017693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.017821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.017988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.018132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.018312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 
00:34:52.494 [2024-07-15 07:03:40.018451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.018621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.018781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.018943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.018968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.019086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.019111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.019249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.019276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.020337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.020365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.020529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.020557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.020700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.020727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.020867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.020899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 
00:34:52.494 [2024-07-15 07:03:40.021041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.021083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.021258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.021303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.021535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.021570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.021697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.021724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.021866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.021927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.022058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.022091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.022244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.022269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.022422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.022464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.022606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.022631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.022794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.022819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 
00:34:52.494 [2024-07-15 07:03:40.022982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.023141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.023345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.023528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.023701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.023910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.023935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.024055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.024080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.024226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.024253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.024415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.024440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 00:34:52.494 [2024-07-15 07:03:40.024633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.024661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.494 qpair failed and we were unable to recover it. 
00:34:52.494 [2024-07-15 07:03:40.024813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.494 [2024-07-15 07:03:40.024840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.024998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.025178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.025341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.025579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.025736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.025903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.025929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.026069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.026110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.026268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.026296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.026422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.026449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 
00:34:52.495 [2024-07-15 07:03:40.026611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.026636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.026759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.026799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.026985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.027842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.027992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.028017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 00:34:52.495 [2024-07-15 07:03:40.028185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.495 [2024-07-15 07:03:40.028226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.495 qpair failed and we were unable to recover it. 
00:34:52.495 [2024-07-15 07:03:40.028369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.495 [2024-07-15 07:03:40.028397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.495 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats roughly 210 times between 07:03:40.028 and 07:03:40.066 (console time 00:34:52.495-00:34:52.784), almost always against tqpair=0x21b6840; eight consecutive failures between 07:03:40.043 and 07:03:40.044 hit tqpair=0x7ff3a8000b90 before the retries return to 0x21b6840 ...]
00:34:52.784 [2024-07-15 07:03:40.066014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.784 [2024-07-15 07:03:40.066045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.784 qpair failed and we were unable to recover it.
00:34:52.784 [2024-07-15 07:03:40.066190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.066216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.066346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.066371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.066488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.066514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.066663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.066690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.066812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.066839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.067015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.067040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.067178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.067204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.067327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.067354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.067485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.067512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.784 qpair failed and we were unable to recover it. 00:34:52.784 [2024-07-15 07:03:40.067693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.784 [2024-07-15 07:03:40.067718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 
00:34:52.785 [2024-07-15 07:03:40.067828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.067870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.068887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.068912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.069026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.069194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.069386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 
00:34:52.785 [2024-07-15 07:03:40.069539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.069703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.069868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.069899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.070825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.070852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.071034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 
00:34:52.785 [2024-07-15 07:03:40.071216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.071368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.071518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.071760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.071930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.071957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.072081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.072231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.072410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.072544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.072725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 
00:34:52.785 [2024-07-15 07:03:40.072892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.072919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.073880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.073906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.074064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.074091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.074268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.074292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 00:34:52.785 [2024-07-15 07:03:40.074429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.074454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.785 qpair failed and we were unable to recover it. 
00:34:52.785 [2024-07-15 07:03:40.074595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.785 [2024-07-15 07:03:40.074637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.074782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.074809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.074984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.075149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.075362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.075507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.075674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.075836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.075861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.076007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.076183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 
00:34:52.786 [2024-07-15 07:03:40.076338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.076530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.076666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.076864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.076898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.077894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.077920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 
00:34:52.786 [2024-07-15 07:03:40.078080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.078240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.078460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.078618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.078750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.078964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.078993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.079149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.079176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.079313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.079339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.079475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.079517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.079651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.079679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 
00:34:52.786 [2024-07-15 07:03:40.079803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.079831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.080850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.080999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.081027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.081209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.081237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.081375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.081401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 
00:34:52.786 [2024-07-15 07:03:40.081516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.081541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.786 qpair failed and we were unable to recover it. 00:34:52.786 [2024-07-15 07:03:40.081717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.786 [2024-07-15 07:03:40.081745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.081871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.081906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.082891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.082916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.083055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 
00:34:52.787 [2024-07-15 07:03:40.083237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.083400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.083566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.083762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.083941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.083969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.084146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.084171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.084328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.084356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.084488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.084515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.084685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.084713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.084847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.084873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 
00:34:52.787 [2024-07-15 07:03:40.085029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.085196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.085366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.085555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.085698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.085901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.085930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.086090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.086118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.086246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.086271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.086458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.086518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.086677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.086704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 
00:34:52.787 [2024-07-15 07:03:40.086838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.086866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.087009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.087038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.087175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.087200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.087346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.087374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.787 [2024-07-15 07:03:40.087536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.787 [2024-07-15 07:03:40.087564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.787 qpair failed and we were unable to recover it. 00:34:52.788 [2024-07-15 07:03:40.087731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.788 [2024-07-15 07:03:40.087756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.788 qpair failed and we were unable to recover it. 00:34:52.788 [2024-07-15 07:03:40.087927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.788 [2024-07-15 07:03:40.087955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.788 qpair failed and we were unable to recover it. 00:34:52.788 [2024-07-15 07:03:40.088121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.788 [2024-07-15 07:03:40.088147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.788 qpair failed and we were unable to recover it. 00:34:52.788 [2024-07-15 07:03:40.088259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.788 [2024-07-15 07:03:40.088284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.788 qpair failed and we were unable to recover it. 00:34:52.788 [2024-07-15 07:03:40.088396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.788 [2024-07-15 07:03:40.088420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.788 qpair failed and we were unable to recover it. 
00:34:52.788 [2024-07-15 07:03:40.088563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.788 [2024-07-15 07:03:40.088588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.788 qpair failed and we were unable to recover it.
00:34:52.793 [... the identical three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 07:03:40.088739 through 07:03:40.126134 ...]
00:34:52.793 [2024-07-15 07:03:40.126266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.126294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.126437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.126463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.126605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.126631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.126794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.126822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.127973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.127999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 
00:34:52.793 [2024-07-15 07:03:40.128141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.128184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.128316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.128344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.128502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.128530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.128693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.128719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.128833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.128858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.129034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.129199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.129378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.129588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.129789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 
00:34:52.793 [2024-07-15 07:03:40.129971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.129999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.130141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.130165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.130298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.130344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.130499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.793 [2024-07-15 07:03:40.130527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.793 qpair failed and we were unable to recover it. 00:34:52.793 [2024-07-15 07:03:40.130694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.130719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.130838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.130863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.130999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.131236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.131381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.131539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 
00:34:52.794 [2024-07-15 07:03:40.131683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.131873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.131906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.132927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.132952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.133098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.133140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.133259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.133293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 
00:34:52.794 [2024-07-15 07:03:40.133436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.133461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.133597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.133622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.133782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.133809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.133989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.134831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.134985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 
00:34:52.794 [2024-07-15 07:03:40.135194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.135365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.135550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.135715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.135885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.135911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.136080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.136274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.136447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.136582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.136749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 
00:34:52.794 [2024-07-15 07:03:40.136928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.136957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.137118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.137143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.137343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.137371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.137501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.137529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.137659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.794 [2024-07-15 07:03:40.137686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.794 qpair failed and we were unable to recover it. 00:34:52.794 [2024-07-15 07:03:40.137910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.137937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.138070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.138245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.138396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.138578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 
00:34:52.795 [2024-07-15 07:03:40.138744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.138934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.138962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.139088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.139116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.139317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.139342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.139449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.139492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.139614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.139641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.139795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.139833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 
00:34:52.795 [2024-07-15 07:03:40.140507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.140811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.140979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.141133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.141317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.141474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.141697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.141857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.141899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.142079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.142104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 
00:34:52.795 [2024-07-15 07:03:40.142248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.142278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.142442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.142471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.142627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.142655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.142792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.142823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.143951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.143980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 
00:34:52.795 [2024-07-15 07:03:40.144103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.144266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.144400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.144585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.144771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.144942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.795 [2024-07-15 07:03:40.144969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.795 qpair failed and we were unable to recover it. 00:34:52.795 [2024-07-15 07:03:40.145082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.145107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.145272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.145299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.145450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.145479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.145645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.145670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 
00:34:52.796 [2024-07-15 07:03:40.145832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.145860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.146905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.146934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.147069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.147241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.147451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 
00:34:52.796 [2024-07-15 07:03:40.147605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.147766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.147915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.147940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.148956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.148985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 00:34:52.796 [2024-07-15 07:03:40.149123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.796 [2024-07-15 07:03:40.149148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.796 qpair failed and we were unable to recover it. 
00:34:52.796 [2024-07-15 07:03:40.149298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.796 [2024-07-15 07:03:40.149323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.796 qpair failed and we were unable to recover it.
[... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously for every retry from 07:03:40.149298 through 07:03:40.187422; duplicate repetitions elided ...]
00:34:52.801 [2024-07-15 07:03:40.187397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.801 [2024-07-15 07:03:40.187422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.801 qpair failed and we were unable to recover it.
00:34:52.801 [2024-07-15 07:03:40.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.187613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.187789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.187817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.188822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.188852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.189004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.189196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 
00:34:52.802 [2024-07-15 07:03:40.189360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.189514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.189675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.189856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.189901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.190960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.190986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 
00:34:52.802 [2024-07-15 07:03:40.191106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.191131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.191266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.191291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.191475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.191503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.191717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.191744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.191952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.191977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.192149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.192192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.192383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.192410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.192629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.192686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.192870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.192914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.193078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.193102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 
00:34:52.802 [2024-07-15 07:03:40.193243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.193271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.193433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.193458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.193664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.193714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.193842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.193870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.194946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.194972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 
00:34:52.802 [2024-07-15 07:03:40.195092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.802 [2024-07-15 07:03:40.195117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.802 qpair failed and we were unable to recover it. 00:34:52.802 [2024-07-15 07:03:40.195285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.195327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.195512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.195540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.195692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.195733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.195931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.195956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.196104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.196129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.196300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.196328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.196503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.196531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.196690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.196717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.196846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.196883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 
00:34:52.803 [2024-07-15 07:03:40.197046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.197216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.197401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.197554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.197712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.197911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.197937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.198054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.198079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.198215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.198243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.198414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.198441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.198661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.198689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 
00:34:52.803 [2024-07-15 07:03:40.198871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.198930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.199077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.199102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.199259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.199286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.199512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.199561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.199734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.199762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.199946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.199975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.200113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.200138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.200316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.200341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.200512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.200540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.200719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.200746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 
00:34:52.803 [2024-07-15 07:03:40.200922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.200948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.201114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.201337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.201497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.201651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.201859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.201993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.202163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.202338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.202522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 
00:34:52.803 [2024-07-15 07:03:40.202699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.202853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.803 [2024-07-15 07:03:40.202887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.803 qpair failed and we were unable to recover it. 00:34:52.803 [2024-07-15 07:03:40.203040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.203195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.203379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.203594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.203776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.203936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.203962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.204106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.204132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.204281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.204308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 
00:34:52.804 [2024-07-15 07:03:40.204467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.204495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.204653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.204682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.204869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.204914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.205105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.205130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.205298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.205326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.205505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.205555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.205713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.205740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.205895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.205920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.206075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.206100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.206242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.206266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 
00:34:52.804 [2024-07-15 07:03:40.206432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.206484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.206642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.206670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.206836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.206861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.206995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.207162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.207327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.207491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.207665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.207874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.207904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.208045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 
00:34:52.804 [2024-07-15 07:03:40.208241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.208418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.208596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.208809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.208971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.208996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.209117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.209143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.209315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.209342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.209488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.804 [2024-07-15 07:03:40.209515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.804 qpair failed and we were unable to recover it. 00:34:52.804 [2024-07-15 07:03:40.209671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.209698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.209853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.209887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 
00:34:52.805 [2024-07-15 07:03:40.210014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.210189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.210363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.210528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.210713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.210930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.210958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.211116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.211144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.211315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.211340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.211486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.211511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 00:34:52.805 [2024-07-15 07:03:40.211701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.805 [2024-07-15 07:03:40.211729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.805 qpair failed and we were unable to recover it. 
00:34:52.805 [2024-07-15 07:03:40.211888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:52.805 [2024-07-15 07:03:40.211917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:52.805 qpair failed and we were unable to recover it.
00:34:52.810 [... the identical connect()/qpair-failure triple repeats continuously from 07:03:40.211888 through 07:03:40.249980: every reconnect attempt to 10.0.0.2, port=4420 for tqpair=0x21b6840 failed with errno = 111, and no qpair could be recovered ...]
00:34:52.810 [2024-07-15 07:03:40.250124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.250149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.250303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.250331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.250515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.250543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.250684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.250711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.250853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.250882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.251041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.251069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.251249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.251276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.251429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.251454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.251566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.251607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.251767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.251795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 
00:34:52.810 [2024-07-15 07:03:40.251986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.252955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.252981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.253118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.253143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.253339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.810 [2024-07-15 07:03:40.253366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.810 qpair failed and we were unable to recover it. 00:34:52.810 [2024-07-15 07:03:40.253532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.253557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 
00:34:52.811 [2024-07-15 07:03:40.253725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.253750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.253897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.253926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.254943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.254969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.255081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.255105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.255291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.255319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 
00:34:52.811 [2024-07-15 07:03:40.255475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.255502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.255696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.255721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.255847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.255874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.256970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.256999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.257161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.257186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 
00:34:52.811 [2024-07-15 07:03:40.257323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.257364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.257519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.257547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.257725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.257753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.257919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.257955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.258117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.258145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.258316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.258344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.258494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.258522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.258687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.258712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.258900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.258928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.259080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.259107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 
00:34:52.811 [2024-07-15 07:03:40.259231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.259259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.259428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.259453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.259586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.259627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.259784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.259812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.259992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.260170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.260312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.260483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.260639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 00:34:52.811 [2024-07-15 07:03:40.260856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.811 [2024-07-15 07:03:40.260890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.811 qpair failed and we were unable to recover it. 
00:34:52.811 [2024-07-15 07:03:40.261048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.261103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.261241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.261268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.261450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.261478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.261614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.261639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.261780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.261809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.261980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.262175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.262337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.262481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.262678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 
00:34:52.812 [2024-07-15 07:03:40.262857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.262890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.263077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.263102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.263250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.263303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.263457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.263485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.263671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.263695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.263838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.263863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.264034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.264184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.264573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 
00:34:52.812 [2024-07-15 07:03:40.264741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.264948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.264977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.265134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.265159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.265309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.265334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.265472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.265497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.265619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.265645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.265840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.265868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.266065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.266090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.266251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.266301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.266449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.266477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 
00:34:52.812 [2024-07-15 07:03:40.266608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.266636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.266818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.266843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.267959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.267988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.268152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.268177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.268353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.268377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 
00:34:52.812 [2024-07-15 07:03:40.268523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.812 [2024-07-15 07:03:40.268576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.812 qpair failed and we were unable to recover it. 00:34:52.812 [2024-07-15 07:03:40.268729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.268757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.268906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.268938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.269095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.269305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.269515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.269668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.269846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.269967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.270151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 
00:34:52.813 [2024-07-15 07:03:40.270321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.270517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.270683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.270885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.270911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.271942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.271969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 
00:34:52.813 [2024-07-15 07:03:40.272110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.272151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.272275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.272303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.272482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.272509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.272676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.272701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.272843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.272868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.273068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.273228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.273390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.273559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.273728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 
00:34:52.813 [2024-07-15 07:03:40.273912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.273941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.274070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.274095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.274214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.274243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.274409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.274451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.274614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.274639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.274807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.274831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.813 [2024-07-15 07:03:40.275010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.813 [2024-07-15 07:03:40.275039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.813 qpair failed and we were unable to recover it. 00:34:52.814 [2024-07-15 07:03:40.275200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.814 [2024-07-15 07:03:40.275227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.814 qpair failed and we were unable to recover it. 00:34:52.814 [2024-07-15 07:03:40.275385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.814 [2024-07-15 07:03:40.275413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.814 qpair failed and we were unable to recover it. 00:34:52.814 [2024-07-15 07:03:40.275545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.814 [2024-07-15 07:03:40.275571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.814 qpair failed and we were unable to recover it. 
00:34:52.819 [2024-07-15 07:03:40.312080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.312121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.312305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.312333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.312519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.312546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.312707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.312732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.312869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.312919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.313089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.313117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.313302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.313329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.313482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.313507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.313670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.313697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.313852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.313903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 
00:34:52.819 [2024-07-15 07:03:40.314024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.314052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.314220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.314245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.314399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.314440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.314593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.314621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.314781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.314809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.314978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.315125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.315325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.315486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.315676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 
00:34:52.819 [2024-07-15 07:03:40.315843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.315892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.316049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.316077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.316262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.316289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.316456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.316480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.316617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.316659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.316819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.316847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.317020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.317188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.317374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.317530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 
00:34:52.819 [2024-07-15 07:03:40.317717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.317913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.317939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.318127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.318155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.318288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.318315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.318478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.318503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.318621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.318646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.819 [2024-07-15 07:03:40.318788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.819 [2024-07-15 07:03:40.318834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.819 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.319010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.319149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.319353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 
00:34:52.820 [2024-07-15 07:03:40.319519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.319687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.319846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.319873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.320050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.320075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.320219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.320260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.320406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.320433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.320615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.320643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.320806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.320830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.321002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.321161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 
00:34:52.820 [2024-07-15 07:03:40.321355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.321576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.321758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.321941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.321970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.322118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.322145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.322307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.322332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.322469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.322510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.322657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.322685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.322831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.322858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.323026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 
00:34:52.820 [2024-07-15 07:03:40.323165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.323334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.323523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.323713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.323901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.323940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.324091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.324115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.324257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.324282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.324455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.324480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.324666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.324693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.324851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.324894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 
00:34:52.820 [2024-07-15 07:03:40.325025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.325053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.325194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.325219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.325372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.325397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.325553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.325580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.325760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.325788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.325986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.326012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.326235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.326263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.820 [2024-07-15 07:03:40.326455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.820 [2024-07-15 07:03:40.326483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.820 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.326614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.326641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.326808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.326833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 
00:34:52.821 [2024-07-15 07:03:40.327036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.327191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.327404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.327586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.327736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.327959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.327987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.328147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.328176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.328312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.328337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.328449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.328475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.328656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.328684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 
00:34:52.821 [2024-07-15 07:03:40.328840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.328868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.329875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.329911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.330051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.330076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.330278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.330303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.330443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.330468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 
00:34:52.821 [2024-07-15 07:03:40.330586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.330626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.330787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.330813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.330982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.331167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.331335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.331532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.331699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.331856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.331889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.332058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.332260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 
00:34:52.821 [2024-07-15 07:03:40.332417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.332607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.332748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.332936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.821 [2024-07-15 07:03:40.332965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.821 qpair failed and we were unable to recover it. 00:34:52.821 [2024-07-15 07:03:40.333114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.333143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.333301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.333327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.333513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.333542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.333676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.333705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.333844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.333875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.334036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 
00:34:52.822 [2024-07-15 07:03:40.334176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.334312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.334516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.334698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.334868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.334920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.335092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.335122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.335236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.335264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.335437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.335463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.335624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.335653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 00:34:52.822 [2024-07-15 07:03:40.335814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.335843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it. 
00:34:52.822 [2024-07-15 07:03:40.335987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.822 [2024-07-15 07:03:40.336014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:52.822 qpair failed and we were unable to recover it.
[the identical connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0x21b6840 from 07:03:40.336154 through 07:03:40.365852; only the timestamps differ between repetitions]
00:34:52.826 [2024-07-15 07:03:40.366035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:52.826 [2024-07-15 07:03:40.366078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:52.826 qpair failed and we were unable to recover it.
[the same triplet then repeats for tqpair=0x7ff398000b90 from 07:03:40.366259 through 07:03:40.373624 (wall clock 00:34:52.826 to 00:34:53.112); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 (ECONNREFUSED)]
00:34:53.112 [2024-07-15 07:03:40.373769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.373814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.373969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.374186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.374359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.374550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.374706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.374913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.374944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.375080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.375106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.375267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.375294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.375413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.375440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 
00:34:53.112 [2024-07-15 07:03:40.375609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.375639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.375785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.375815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.375981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.376132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.376358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.376526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.376667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.376838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.376868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.377006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.377171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 
00:34:53.112 [2024-07-15 07:03:40.377345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.377543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.377728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.377913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.377940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.112 [2024-07-15 07:03:40.378935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.378966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 
00:34:53.112 [2024-07-15 07:03:40.379095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.112 [2024-07-15 07:03:40.379125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.112 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.379303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.379330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.379458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.379485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.379607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.379634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.379769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.379804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.379943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.379971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.380092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.380118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.380265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.380295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.380451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.380482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.380642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.380670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 
00:34:53.113 [2024-07-15 07:03:40.380836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.380866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.381074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.381254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.381443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.381633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.381821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.381994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.382169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.382376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.382589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 
00:34:53.113 [2024-07-15 07:03:40.382744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.382934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.382965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.383150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.383177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.383348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.383379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.383535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.383564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.383733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.383762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.383925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.383953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.384070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.384097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.384274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.384300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.384449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.384492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 
00:34:53.113 [2024-07-15 07:03:40.384651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.384678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.384852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.384904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.385061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.385091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.385242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.385272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.385438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.385464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.385649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.385679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.385861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.385898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.386052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.386083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.386255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.386282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.386472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.386502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 
00:34:53.113 [2024-07-15 07:03:40.386659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.113 [2024-07-15 07:03:40.386689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.113 qpair failed and we were unable to recover it. 00:34:53.113 [2024-07-15 07:03:40.386889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.386917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.387067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.387267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.387474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.387660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.387834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.387973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.388173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.388360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 
00:34:53.114 [2024-07-15 07:03:40.388581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.388743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.388897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.388928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.389049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.389079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.389246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.389274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.389456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.389485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.389642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.389673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.389804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.389834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.390009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.390157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 
00:34:53.114 [2024-07-15 07:03:40.390362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.390515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.390711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.390911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.390941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.391107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.391137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.391295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.391324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.391486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.391512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.391698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.391726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.391847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.391876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.392060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 
00:34:53.114 [2024-07-15 07:03:40.392222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.392401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.392601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.392762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.392937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.392964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.393080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.393108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.393281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.393310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.393505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.393532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.393707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.393734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.393902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.393933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 
00:34:53.114 [2024-07-15 07:03:40.394091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.394120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.114 qpair failed and we were unable to recover it. 00:34:53.114 [2024-07-15 07:03:40.394284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.114 [2024-07-15 07:03:40.394310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.394452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.394479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.394598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.394624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.394742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.394773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.394891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.394920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.395042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.395191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.395361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.395541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 
00:34:53.115 [2024-07-15 07:03:40.395703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.395874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.395935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.396085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.396114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.396276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.396305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.396488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.396515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.396703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.396733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.396859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.396899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.397031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.397062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.397258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.397285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 00:34:53.115 [2024-07-15 07:03:40.397452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.115 [2024-07-15 07:03:40.397482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.115 qpair failed and we were unable to recover it. 
00:34:53.115 [2024-07-15 07:03:40.397638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.115 [2024-07-15 07:03:40.397667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.115 qpair failed and we were unable to recover it.
00:34:53.115 [... the same three-line error sequence repeats for every reconnect attempt between 07:03:40.397850 and 07:03:40.435908: each connect() to 10.0.0.2, port=4420 is refused with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7ff398000b90, and each qpair fails without recovery ...]
00:34:53.120 [2024-07-15 07:03:40.435937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.120 [2024-07-15 07:03:40.435967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.120 qpair failed and we were unable to recover it.
00:34:53.121 [2024-07-15 07:03:40.436162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.436187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.436353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.436381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.436513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.436543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.436677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.436707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.436870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.436904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.437049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.437095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.437240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.437269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.437454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.437483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.437627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.437654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.437776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.437803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 
00:34:53.121 [2024-07-15 07:03:40.437971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.438154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.438328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.438477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.438688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.438862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.438897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.439010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.439156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.439354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.439534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 
00:34:53.121 [2024-07-15 07:03:40.439731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.439919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.439949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.440110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.440139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.440298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.440327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.440489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.440516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.440698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.440727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.440858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.440897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.441082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.441108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.441252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.441279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 00:34:53.121 [2024-07-15 07:03:40.441436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.121 [2024-07-15 07:03:40.441477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.121 qpair failed and we were unable to recover it. 
00:34:53.121 [2024-07-15 07:03:40.442827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.121 [2024-07-15 07:03:40.442854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.121 qpair failed and we were unable to recover it.
00:34:53.121 [2024-07-15 07:03:40.443002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c4390 is same with the state(5) to be set
00:34:53.121 [2024-07-15 07:03:40.443205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.121 [2024-07-15 07:03:40.443249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.121 qpair failed and we were unable to recover it.
00:34:53.121 [2024-07-15 07:03:40.443411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.121 [2024-07-15 07:03:40.443443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.121 qpair failed and we were unable to recover it.
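For context on the pair of messages that repeats above: on Linux, errno = 111 is ECONNREFUSED, and 4420 is the IANA-assigned NVMe/TCP port, so each attempt is the initiator's TCP connect() to the target at 10.0.0.2:4420 being actively refused, which typically means nothing is listening on that port at that moment (an unreachable host would time out instead). The following is a minimal standalone sketch, not SPDK's socket code, that reproduces the same errno against a port with no listener; the address and port are taken from the log:

/* sketch.c: reproduce "connect() failed, errno = 111" against a port
 * with no listener. Illustrative only; this is not SPDK's posix_sock_create(). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),                      /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);     /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled with a plain cc sketch.c and run on the build host while the target is down, it should print the same errno = 111 line seen throughout this section.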
00:34:53.122 [2024-07-15 07:03:40.444735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.122 [2024-07-15 07:03:40.444761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.122 qpair failed and we were unable to recover it.
00:34:53.122 [2024-07-15 07:03:40.444911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.122 [2024-07-15 07:03:40.444940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.122 qpair failed and we were unable to recover it.
00:34:53.123 [2024-07-15 07:03:40.453809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.123 [2024-07-15 07:03:40.453853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.123 qpair failed and we were unable to recover it.
00:34:53.123 [2024-07-15 07:03:40.454003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.123 [2024-07-15 07:03:40.454043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.123 qpair failed and we were unable to recover it.
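Unlike the connect() failures, the single message above from nvme_tcp.c: 323 (nvme_tcp_qpair_set_recv_state, tqpair=0x21c4390) reports a state-machine no-op: the qpair's receive state was asked to move to the state (5) it already held. A guard of roughly the following shape emits that style of error. This is an illustrative sketch only, not SPDK's implementation; the enum names are hypothetical, and only the numeric value 5 comes from the log:

/* recv_state_guard.c: illustrative sketch of a setter that rejects
 * re-entering the current state, producing the message format above. */
#include <stdio.h>

enum recv_state {
    RECV_STATE_READY = 0,   /* hypothetical name */
    RECV_STATE_ERROR = 5    /* only the value 5 appears in the log */
};

struct tqpair {
    enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR);   /* takes the error path above */
    return 0;
}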
00:34:53.125 [2024-07-15 07:03:40.468129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.125 [2024-07-15 07:03:40.468155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.125 qpair failed and we were unable to recover it.
00:34:53.125 [2024-07-15 07:03:40.468320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.468349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.468526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.468553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.468724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.468750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.468916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.468946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.469100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.469129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.469294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.469321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.469443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.469486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.469670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.469699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.469836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.469862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.470012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 
00:34:53.125 [2024-07-15 07:03:40.470224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.470384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.470517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.470685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.125 qpair failed and we were unable to recover it. 00:34:53.125 [2024-07-15 07:03:40.470887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.125 [2024-07-15 07:03:40.470931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.471043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.471071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.471212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.471242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.471441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.471468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.471632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.471666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.471821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.471851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 
00:34:53.126 [2024-07-15 07:03:40.472021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.472200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.472369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.472535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.472716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.472913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.472940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.473083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.473109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.473272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.473299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.473477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.473506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.473640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.473666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 
00:34:53.126 [2024-07-15 07:03:40.473835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.473886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.474949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.474977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.475125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.475152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.475294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.475321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.475480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.475510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 
00:34:53.126 [2024-07-15 07:03:40.475657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.475687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.475896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.475924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.476954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.476982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.477090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.477132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.477286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.477315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 
00:34:53.126 [2024-07-15 07:03:40.477474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.126 [2024-07-15 07:03:40.477500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.126 qpair failed and we were unable to recover it. 00:34:53.126 [2024-07-15 07:03:40.477646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.477672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.477793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.477820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.477958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.477986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.478133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.478163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.478294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.478325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.478493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.478519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.478664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.478711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.478896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.478941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.479097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.479125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 
00:34:53.127 [2024-07-15 07:03:40.479305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.479355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.479549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.479576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.479721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.479747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.479901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.479929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.480042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.480068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.480237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.480264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.480418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.480447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.480630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.480659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.480822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.480848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.481004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 
00:34:53.127 [2024-07-15 07:03:40.481194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.481355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.481507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.481691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.481955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.481983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.482136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.482177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.482322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.482351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.482517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.482544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.482700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.482730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.482864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.482903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 
00:34:53.127 [2024-07-15 07:03:40.483097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.483124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.483290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.483319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.483472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.483502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.483687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.483715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.483900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.483930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.484051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.484085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.484246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.484272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.484380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.484407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.484604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.484633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 00:34:53.127 [2024-07-15 07:03:40.484763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.484809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.127 qpair failed and we were unable to recover it. 
00:34:53.127 [2024-07-15 07:03:40.485001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.127 [2024-07-15 07:03:40.485028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.485153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.485180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.485319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.485346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.485503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.485532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.485678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.485707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.485896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.485924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.486084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.486113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.486288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.486315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.486477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.486504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.486669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.486700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 
00:34:53.128 [2024-07-15 07:03:40.486863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.486901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.487093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.487119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.487281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.487311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.487494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.487524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.487707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.487737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.487902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.487947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.488084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.488111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.488254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.488281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.488441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.488470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.488595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.488626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 
00:34:53.128 [2024-07-15 07:03:40.488801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.488830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.488993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.489020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.489190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.489233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.489427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.489453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.489628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.489658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.489779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.489809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.490004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.490187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.490365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.490557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 
00:34:53.128 [2024-07-15 07:03:40.490743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.490934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.490961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.491077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.491103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.491243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.491269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.491439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.491467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.491632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.491666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.491861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.128 [2024-07-15 07:03:40.491898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.128 qpair failed and we were unable to recover it. 00:34:53.128 [2024-07-15 07:03:40.492060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.129 [2024-07-15 07:03:40.492086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.129 qpair failed and we were unable to recover it. 00:34:53.129 [2024-07-15 07:03:40.492277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.129 [2024-07-15 07:03:40.492306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.129 qpair failed and we were unable to recover it. 00:34:53.129 [2024-07-15 07:03:40.492460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.129 [2024-07-15 07:03:40.492491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.129 qpair failed and we were unable to recover it. 
00:34:53.129 [2024-07-15 07:03:40.492652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.129 [2024-07-15 07:03:40.492679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.129 qpair failed and we were unable to recover it.
[... the identical error triplet repeats continuously for every reconnect attempt from 07:03:40.492652 through 07:03:40.531679, always with errno = 111 for tqpair=0x7ff3a8000b90, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:34:53.134 [2024-07-15 07:03:40.531649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.134 [2024-07-15 07:03:40.531679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.134 qpair failed and we were unable to recover it.
00:34:53.134 [2024-07-15 07:03:40.531796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.531825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.531995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.532138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.532333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.532519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.532732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.532910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.532940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.533134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.533161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.533320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.533350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.533534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.533564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 
00:34:53.134 [2024-07-15 07:03:40.533698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.533725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.533874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.533908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.534039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.534069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.534233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.534260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.134 [2024-07-15 07:03:40.534407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.134 [2024-07-15 07:03:40.534433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.134 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.534550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.534578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.534747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.534774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.534937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.534967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.535087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.535116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.535286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.535312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 
00:34:53.135 [2024-07-15 07:03:40.535434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.535460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.535597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.535624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.535791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.535818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.535983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.536161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.536334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.536554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.536738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.536935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.536962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.537123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.537152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 
00:34:53.135 [2024-07-15 07:03:40.537324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.537351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.537466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.537493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.537686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.537715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.537867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.537905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.538066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.538094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.538242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.538269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.538383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.538410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.538577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.538603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.538799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.538828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.539002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 
00:34:53.135 [2024-07-15 07:03:40.539194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.539342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.539510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.539652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.539823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.539851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.540024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.540054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.540213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.540239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.540376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.540418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.540579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.540609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.540772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.540798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 
00:34:53.135 [2024-07-15 07:03:40.540992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.541023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.541151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.541182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.541373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.541400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.541561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.541590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.541768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.135 [2024-07-15 07:03:40.541798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.135 qpair failed and we were unable to recover it. 00:34:53.135 [2024-07-15 07:03:40.541962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.541990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.542156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.542186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.542338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.542368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.542498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.542531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.542724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.542753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 
00:34:53.136 [2024-07-15 07:03:40.542901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.542931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.543066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.543093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.543253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.543283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.543440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.543469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.543628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.543655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.543800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.543831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.544014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.544208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.544421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.544611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 
00:34:53.136 [2024-07-15 07:03:40.544778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.544950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.544993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.545179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.545208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.545341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.545369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.545536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.545581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.545737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.545767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.545928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.545955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.546096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.546141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.546293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.546323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.546521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.546548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 
00:34:53.136 [2024-07-15 07:03:40.546669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.546695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.546838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.546866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.547102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.547275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.547456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.547644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.547827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.547994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.548025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.548218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.548245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.548407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.548437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 
00:34:53.136 [2024-07-15 07:03:40.548589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.548619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.548779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.548806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.548975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.549007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.549131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.549161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.549322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.549349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.549471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.136 [2024-07-15 07:03:40.549498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.136 qpair failed and we were unable to recover it. 00:34:53.136 [2024-07-15 07:03:40.549645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.549672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.549791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.549818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.549934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.549961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.550131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.550160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 
00:34:53.137 [2024-07-15 07:03:40.550322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.550350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.550512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.550542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.550703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.550732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.550902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.550930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.551095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.551125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.551284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.551318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.551450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.551478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.551645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.551688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.551868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.551902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.552074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 
00:34:53.137 [2024-07-15 07:03:40.552214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.552408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.552582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.552767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.552918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.552948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.553087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.553114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.553284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.553310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.553466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.553496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.553690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.553717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.553889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.553919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 
00:34:53.137 [2024-07-15 07:03:40.554078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.554267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.554428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.554599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.554787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.554950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.554995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.555157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.555186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.555373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.555400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.555515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.555559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 00:34:53.137 [2024-07-15 07:03:40.555684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.137 [2024-07-15 07:03:40.555713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.137 qpair failed and we were unable to recover it. 
00:34:53.137 [2024-07-15 07:03:40.555857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.137 [2024-07-15 07:03:40.555892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.137 qpair failed and we were unable to recover it.
[... the same three-line error block repeats 209 more times between 07:03:40.556081 and 07:03:40.595236 (console time 00:34:53.137-00:34:53.143), differing only in timestamps: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, and each qpair fails without recovery ...]
00:34:53.143 [2024-07-15 07:03:40.595426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.595453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.595584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.595611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.595754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.595783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.595954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.595981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.596139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.596168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.596322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.596351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.596496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.596523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.596650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.596676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.596857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.596896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.597022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 
00:34:53.143 [2024-07-15 07:03:40.597168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.597359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.597499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.597640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.597836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.597866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 
00:34:53.143 [2024-07-15 07:03:40.598818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.598848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.598991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.599972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.599999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.600117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.143 [2024-07-15 07:03:40.600143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.143 qpair failed and we were unable to recover it. 00:34:53.143 [2024-07-15 07:03:40.600289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.600316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 
00:34:53.144 [2024-07-15 07:03:40.600482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.600509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.600654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.600696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.600873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.600912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.601037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.601064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.601239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.601269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.601418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.601448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.601616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.601643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.601788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.601833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.602016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.602189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 
00:34:53.144 [2024-07-15 07:03:40.602400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.602570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.602738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.602935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.602979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.603144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.603174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.603347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.603373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.603526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.603553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.603693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.603724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.603913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.603940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.604084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.604127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 
00:34:53.144 [2024-07-15 07:03:40.604288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.604317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.604463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.604490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.604624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.604651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.604818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.604848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.604990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.605166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.605354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.605549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.605714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.605886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.605916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 
00:34:53.144 [2024-07-15 07:03:40.606071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.606235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.606426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.606601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.606744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.606920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.606948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.607103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.607130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.607249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.607276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.607416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.607443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 00:34:53.144 [2024-07-15 07:03:40.607590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.144 [2024-07-15 07:03:40.607616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.144 qpair failed and we were unable to recover it. 
00:34:53.145 [2024-07-15 07:03:40.607766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.607801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.607956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.607987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.608145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.608182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.608293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.608337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.608498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.608528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.608665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.608692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.608838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.608888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.609023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.609214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.609357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 
00:34:53.145 [2024-07-15 07:03:40.609562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.609755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.609892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.609917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.610919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.610964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.611123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.611153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 
00:34:53.145 [2024-07-15 07:03:40.611309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.611336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.611518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.611547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.611668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.611697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.611828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.611856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.612721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 
00:34:53.145 [2024-07-15 07:03:40.612892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.612920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.613123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.613277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.613490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.613657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.613837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.613959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.614004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.614132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.614162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.614307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.614333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.145 qpair failed and we were unable to recover it. 00:34:53.145 [2024-07-15 07:03:40.614518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.145 [2024-07-15 07:03:40.614547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 
00:34:53.146 [2024-07-15 07:03:40.614698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.614731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.614977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.615189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.615388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.615574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.615746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.615907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.615937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.616102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.616129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.616316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.616346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.616503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.616533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 
00:34:53.146 [2024-07-15 07:03:40.616701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.616727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.616899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.616926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.617072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.617102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.617265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.617292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.617454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.617483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.617633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.617662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.617845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.617873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.618018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.618047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.618178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.618207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 00:34:53.146 [2024-07-15 07:03:40.618374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.618400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it. 
00:34:53.146 [2024-07-15 07:03:40.618557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.146 [2024-07-15 07:03:40.618585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.146 qpair failed and we were unable to recover it.
00:34:53.146-00:34:53.151 [the same three-message sequence (posix.c:1037:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-07-15 07:03:40.618732 through 07:03:40.655752; repeated entries elided]
00:34:53.151 [2024-07-15 07:03:40.655949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.151 [2024-07-15 07:03:40.655977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.151 qpair failed and we were unable to recover it.
00:34:53.151 [2024-07-15 07:03:40.656116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.656142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.656258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.656284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.656494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.656521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.656634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.656677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.656843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.656871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.657032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.657060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.657215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.657244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.657428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.657458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.657621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.657647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.657760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.657786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 
00:34:53.152 [2024-07-15 07:03:40.657977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.658145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.658343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.658543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.658742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.658900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.658946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.659073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.659103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.659241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.659267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.659460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.659490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.659669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.659698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 
00:34:53.152 [2024-07-15 07:03:40.659870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.659906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.660941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.660969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.661114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.661140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.661305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.661335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.661471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.661498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 
00:34:53.152 [2024-07-15 07:03:40.661669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.661696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.661854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.661893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.662055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.662082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.662268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.662297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.662478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.662518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.662695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.662723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.662871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.662905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.663093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.663123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.663251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.663277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 00:34:53.152 [2024-07-15 07:03:40.663419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.663446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.152 qpair failed and we were unable to recover it. 
00:34:53.152 [2024-07-15 07:03:40.663611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.152 [2024-07-15 07:03:40.663641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.663805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.663831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.664892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.664920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.665048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.665079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.665246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.665275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 
00:34:53.153 [2024-07-15 07:03:40.665439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.665466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.665623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.665653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.665835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.665865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.666927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.666957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.667127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.667153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 
00:34:53.153 [2024-07-15 07:03:40.667331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.667358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.667535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.667562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.667669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.667697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.667831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.667857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.668906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.668951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 
00:34:53.153 [2024-07-15 07:03:40.669116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.669146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.669316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.669343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.669485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.669511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.669677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.669706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.669842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.669870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.670042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.670068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.670239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.670268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.670423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.670450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.670633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.670662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.670816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.670850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 
00:34:53.153 [2024-07-15 07:03:40.671034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.153 [2024-07-15 07:03:40.671062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.153 qpair failed and we were unable to recover it. 00:34:53.153 [2024-07-15 07:03:40.671206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.671233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.671375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.671402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.671551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.671579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.671692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.671718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.671841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.671868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.672018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.672195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.672378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.672572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 
00:34:53.154 [2024-07-15 07:03:40.672718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.672918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.672948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.673116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.673142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.673298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.673328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.673488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.673519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.673685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.673712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.673897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.673927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.674083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.674247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.674441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 
00:34:53.154 [2024-07-15 07:03:40.674643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.674805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.674969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.674996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.675143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.675282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.675452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.675618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.675812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.675970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.676196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 
00:34:53.154 [2024-07-15 07:03:40.676365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.676582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.676791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.676965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.676993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.677133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.677159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.677324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.677354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.677518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.677546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.677735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.677765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.677950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.677981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.678117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.678148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 
00:34:53.154 [2024-07-15 07:03:40.678291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.678317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.678485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.678515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.678652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.678679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.154 [2024-07-15 07:03:40.678863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.154 [2024-07-15 07:03:40.678901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.154 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.679941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.679969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 
00:34:53.155 [2024-07-15 07:03:40.680155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.680184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.680340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.680366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.680503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.680545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.680713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.680742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.680887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.680914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.681087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.681114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.681273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.681302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.681467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.681493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.681639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.681683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 00:34:53.155 [2024-07-15 07:03:40.681865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.155 [2024-07-15 07:03:40.681918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.155 qpair failed and we were unable to recover it. 
00:34:53.446 [2024-07-15 07:03:40.717087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.717248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.717439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.717611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.717775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.717950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.717977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.718121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.718148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.718331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.718360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.718525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.718553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.718740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.718770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 
00:34:53.446 [2024-07-15 07:03:40.718956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.718986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.719123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.719158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.719309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.719356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.719516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.719545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.719685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.719711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.719849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.719883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.720107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.720152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.720355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.720384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.720569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.720599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.720757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.720787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 
00:34:53.446 [2024-07-15 07:03:40.720952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.720979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.721168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.721198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.721446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.721499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.721656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.721692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.721852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.721890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.722086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.722276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.722419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.722593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.722795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 
00:34:53.446 [2024-07-15 07:03:40.722970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.722997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.723167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.723210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.723374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.723400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.723557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.723586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.723774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.723803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.723935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.723963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.724133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.724178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.724303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.724333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.724523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.724549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 00:34:53.446 [2024-07-15 07:03:40.724673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.446 [2024-07-15 07:03:40.724699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.446 qpair failed and we were unable to recover it. 
00:34:53.447 [2024-07-15 07:03:40.724814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.724841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.724998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.725169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.725370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.725559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.725731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.725941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.725972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.726141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.726169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.726315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.726360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.726516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.726545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 
00:34:53.447 [2024-07-15 07:03:40.726707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.726733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.726904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.726934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.727085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.727114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.727276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.727302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.727446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.727489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.727687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.727714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.727862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.727895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.728069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.728098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.728255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.728285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.728474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.728501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 
00:34:53.447 [2024-07-15 07:03:40.728637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.728665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.728805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.728833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.728976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.729150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.729354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.729496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.729638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.729810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.729840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.730042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.730070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.730236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.730272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 
00:34:53.447 [2024-07-15 07:03:40.730503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.730557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.730725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.730754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.730945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.730975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.731106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.731135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.731311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.731338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.731496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.731527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.731644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.731674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.731809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.731834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.447 [2024-07-15 07:03:40.732007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.447 [2024-07-15 07:03:40.732034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.447 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.732189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.732219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 
00:34:53.448 [2024-07-15 07:03:40.732382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.732409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.732551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.732593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.732749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.732779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.732948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.732976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.733126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.733153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.733325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.733350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.733456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.733483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.733646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.733694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.733849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.733886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.734050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.734076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 
00:34:53.448 [2024-07-15 07:03:40.734271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.734300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.734484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.734514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.734650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.734687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.734811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.734837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.734996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.735027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.735196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.735223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.735425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.735455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.735595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.735624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.735786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.735814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.735990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 
00:34:53.448 [2024-07-15 07:03:40.736213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.736404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.736540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.736730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.736920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.736948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.737084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.737110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.737274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.737303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.737493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.737519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.737629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.737672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.737852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.737895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 
00:34:53.448 [2024-07-15 07:03:40.738027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.738054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.448 [2024-07-15 07:03:40.738240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.448 [2024-07-15 07:03:40.738270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.448 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.738392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.738421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.738586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.738612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.738778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.738808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.738963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.738993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.739159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.739186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.739373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.739402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.739530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.739560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.739755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.739782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 
00:34:53.449 [2024-07-15 07:03:40.739971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.740161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.740379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.740594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.740785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.740935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.740963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.741085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.741112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.741316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.741346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.741539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.741566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.741753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.741783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 
00:34:53.449 [2024-07-15 07:03:40.741969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.741998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.742161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.742187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.742305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.742347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.742500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.742528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.742696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.742722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.742864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.742899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.743088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.743118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.743275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.743302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.743491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.743520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 00:34:53.449 [2024-07-15 07:03:40.743676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.449 [2024-07-15 07:03:40.743704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.449 qpair failed and we were unable to recover it. 
00:34:53.449 [2024-07-15 07:03:40.743892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.449 [2024-07-15 07:03:40.743920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.449 qpair failed and we were unable to recover it.
00:34:53.449 (last message group repeated 208 more times for tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420, from [2024-07-15 07:03:40.744081] through [2024-07-15 07:03:40.783176])
00:34:53.455 [2024-07-15 07:03:40.783365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.455 [2024-07-15 07:03:40.783394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.455 qpair failed and we were unable to recover it.
00:34:53.455 [2024-07-15 07:03:40.783590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.783617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.783779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.783809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.783967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.783998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.784172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.784199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.784358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.784388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.784568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.784598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.784740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.784769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.784936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.784963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.785093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.785122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.785290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.785317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 
00:34:53.455 [2024-07-15 07:03:40.785480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.785508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.785691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.785720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.785887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.785914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.786954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.786982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.787103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.787130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 
00:34:53.455 [2024-07-15 07:03:40.787304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.787334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.787503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.787529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.787719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.787748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.787934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.787965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.788134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.788161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.788317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.788346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.788502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.788536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.788676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.788703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.788860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.788904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.789051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.789077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 
00:34:53.455 [2024-07-15 07:03:40.789229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.789255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.789444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.789473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.789605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.789635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.789800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.789827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.789989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.790020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.790209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.790239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.790401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.790428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.455 [2024-07-15 07:03:40.790570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.455 [2024-07-15 07:03:40.790614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.455 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.790769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.790799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.790966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.790994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 
00:34:53.456 [2024-07-15 07:03:40.791156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.791185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.791316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.791345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.791505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.791532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.791720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.791750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.791943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.791974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.792150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.792176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.792362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.792392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.792581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.792608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.792753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.792780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.792927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.792954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 
00:34:53.456 [2024-07-15 07:03:40.793094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.793120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.793284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.793310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.793496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.793526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.793688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.793718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.793872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.793906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.794028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.794070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.794226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.794256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.794421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.794448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.794613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.794644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.794806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.794836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 
00:34:53.456 [2024-07-15 07:03:40.795011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.795204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.795362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.795549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.795761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.795962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.795989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.796131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.796162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.796324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.796354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.796513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.796543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.796699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.796726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 
00:34:53.456 [2024-07-15 07:03:40.796845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.796904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.797080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.456 [2024-07-15 07:03:40.797110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.456 qpair failed and we were unable to recover it. 00:34:53.456 [2024-07-15 07:03:40.797291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.797318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.797479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.797511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.797644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.797674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.797836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.797864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.798035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.798066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.798223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.798251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.798415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.798441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.798598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.798629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 
00:34:53.457 [2024-07-15 07:03:40.798816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.798846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.799017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.799044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.799187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.799232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.799433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.799460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.799603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.799630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.799816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.799845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.800022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.800195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.800338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.800564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 
00:34:53.457 [2024-07-15 07:03:40.800759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.800907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.800953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.801139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.801167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.801318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.801344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.801506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.801535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.801694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.801724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.801864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.801899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.802045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.802224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.802374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 
00:34:53.457 [2024-07-15 07:03:40.802522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.802701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.802862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.802896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.803036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.803063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.803201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.803244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.803411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.803437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.803605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.803653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.803841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.803869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.804040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.804069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.804216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.804261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 
00:34:53.457 [2024-07-15 07:03:40.804380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.804409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.804571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.457 [2024-07-15 07:03:40.804597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.457 qpair failed and we were unable to recover it. 00:34:53.457 [2024-07-15 07:03:40.804788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.804818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.805025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.805053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.805223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.805250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.805388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.805416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.805572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.805602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.805795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.805822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 
00:34:53.458 [2024-07-15 07:03:40.806370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.806855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.806984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.807028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.807191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.807220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.807413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.807439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.807573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.807602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.807784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.807814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 00:34:53.458 [2024-07-15 07:03:40.808010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.458 [2024-07-15 07:03:40.808037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.458 qpair failed and we were unable to recover it. 
00:34:53.458 [2024-07-15 07:03:40.808230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.458 [2024-07-15 07:03:40.808259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.458 qpair failed and we were unable to recover it.
00:34:53.463 [the identical three-record failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back for every reconnect attempt from 07:03:40.808 through 07:03:40.848]
00:34:53.463 [2024-07-15 07:03:40.848282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.848310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.848468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.848497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.848658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.848684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.848803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.848830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 00:34:53.463 [2024-07-15 07:03:40.849909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.463 [2024-07-15 07:03:40.849937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.463 qpair failed and we were unable to recover it. 
00:34:53.464 [2024-07-15 07:03:40.850123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.850152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.850293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.850319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.850446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.850472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.850661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.850690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.850866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.850901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.851075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.851103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.851236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.851266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.851457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.851484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.851623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.851652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.851783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.851812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 
00:34:53.464 [2024-07-15 07:03:40.851989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.852161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.852385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.852546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.852699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.852903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.852934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.853078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.853219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.853420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.853616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 
00:34:53.464 [2024-07-15 07:03:40.853766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.853961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.853990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.854132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.854158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.854306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.854349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.854508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.854537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.854669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.854712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.854871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.854945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.855068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.855094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.855260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.855285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.855484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.855513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 
00:34:53.464 [2024-07-15 07:03:40.855675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.855704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.855850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.855881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.856904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.856932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.857093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.857121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 00:34:53.464 [2024-07-15 07:03:40.857253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.464 [2024-07-15 07:03:40.857282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.464 qpair failed and we were unable to recover it. 
00:34:53.464 [2024-07-15 07:03:40.857424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.857450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.857645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.857674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.857847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.857874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.858912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.858942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.859080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.859106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 
00:34:53.465 [2024-07-15 07:03:40.859258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.859303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.859455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.859484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.859631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.859656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.859847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.859883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.860925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.860952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 
00:34:53.465 [2024-07-15 07:03:40.861064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.861089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.861274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.861300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.861443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.861470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.861637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.861670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.861843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.861869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.862008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.862183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.862354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.862549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.862737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 
00:34:53.465 [2024-07-15 07:03:40.862961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.862988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.863097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.863141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.863294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.863323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.863491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.863517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.863684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.465 [2024-07-15 07:03:40.863710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.465 qpair failed and we were unable to recover it. 00:34:53.465 [2024-07-15 07:03:40.863889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.863919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.864050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.864076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.864266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.864294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.864488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.864514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.864660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.864687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 
00:34:53.466 [2024-07-15 07:03:40.864886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.864916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.865953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.865980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.866117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.866144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.866283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.866308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.866429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.866456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 
00:34:53.466 [2024-07-15 07:03:40.866639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.866683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.866871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.866908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.867038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.867068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.867221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.867250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.867392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.867418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.867595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.867639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.867825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.867854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.868004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.868180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.868351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 
00:34:53.466 [2024-07-15 07:03:40.868544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.868734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.868922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.868952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.869117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.869150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.869308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.869337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.869536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.869562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.869732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.869758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.869945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.869975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.870130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.870158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.870300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.870326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 
00:34:53.466 [2024-07-15 07:03:40.870440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.870466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.870671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.870700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.870892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.870920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.871055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.871085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.871252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.466 [2024-07-15 07:03:40.871281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.466 qpair failed and we were unable to recover it. 00:34:53.466 [2024-07-15 07:03:40.871443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.467 [2024-07-15 07:03:40.871468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.467 qpair failed and we were unable to recover it. 00:34:53.467 [2024-07-15 07:03:40.871609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.467 [2024-07-15 07:03:40.871654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.467 qpair failed and we were unable to recover it. 00:34:53.467 [2024-07-15 07:03:40.871787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.467 [2024-07-15 07:03:40.871817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.467 qpair failed and we were unable to recover it. 00:34:53.467 [2024-07-15 07:03:40.871993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.467 [2024-07-15 07:03:40.872021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.467 qpair failed and we were unable to recover it. 00:34:53.467 [2024-07-15 07:03:40.872140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.467 [2024-07-15 07:03:40.872166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.467 qpair failed and we were unable to recover it. 
00:34:53.467 [2024-07-15 07:03:40.872341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.467 [2024-07-15 07:03:40.872372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.467 qpair failed and we were unable to recover it.
00:34:53.472 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats ~210 times between 07:03:40.872 and 07:03:40.912, every time for tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420; duplicate repetitions elided ...]
00:34:53.472 [2024-07-15 07:03:40.912438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.912464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.912634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.912660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.912818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.912846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.913047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.913076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.913273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.913299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.913483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.913512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.913641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.913671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.913835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.913860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.914058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.914088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.914222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.914248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 
00:34:53.472 [2024-07-15 07:03:40.914389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.914415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.914579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.914608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.914756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.914785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.914977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.915147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.915335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.915551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.915702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.915897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.915939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.472 [2024-07-15 07:03:40.916097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.916123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 
00:34:53.472 [2024-07-15 07:03:40.916269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.472 [2024-07-15 07:03:40.916295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.472 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.916429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.916454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.916595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.916622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.916783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.916813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.916989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.917016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.917159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.917185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.917343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.917373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.917560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.917589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.917775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.917801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.917970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 
00:34:53.473 [2024-07-15 07:03:40.918135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.918330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.918520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.918675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.918884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.918923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.919062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.919089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.919241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.919269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.919415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.919442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.919586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.919629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.919788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.919818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 
00:34:53.473 [2024-07-15 07:03:40.920012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.920163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.920309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.920479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.920650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.920816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.920843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.921016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.921044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.921239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.921269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.921456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.921486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.921648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.921675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 
00:34:53.473 [2024-07-15 07:03:40.921785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.921813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.922940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.922985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.923149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.923179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.923347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.923373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.473 [2024-07-15 07:03:40.923517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.923544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 
00:34:53.473 [2024-07-15 07:03:40.923708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.473 [2024-07-15 07:03:40.923738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.473 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.923927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.923955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.924111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.924147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.924304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.924333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.924473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.924500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.924647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.924673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.924840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.924869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.925074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.925102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.925302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.925332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.925515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.925545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 
00:34:53.474 [2024-07-15 07:03:40.925708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.925734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.925924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.925955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.926136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.926166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.926299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.926326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.926501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.926546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.926700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.926729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.926866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.926905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.927092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.927122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.927295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.927324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.927494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.927520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 
00:34:53.474 [2024-07-15 07:03:40.927678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.927707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.927861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.927900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.928889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.928927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.929081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.929108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.929302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.929332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 
00:34:53.474 [2024-07-15 07:03:40.929493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.929523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.929715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.929742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.929930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.929961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.930144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.930175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.930296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.930322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.930507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.474 [2024-07-15 07:03:40.930536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.474 qpair failed and we were unable to recover it. 00:34:53.474 [2024-07-15 07:03:40.930718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.930747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.930918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.930945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.931104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.931138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.931296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.931326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 
00:34:53.475 [2024-07-15 07:03:40.931492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.931519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.931707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.931736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.931919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.931949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.932143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.932169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.932323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.932352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.932537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.932566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.932718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.932744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.932870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.932902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.933048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.933220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 
00:34:53.475 [2024-07-15 07:03:40.933405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.933571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.933738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.933961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.933990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.934148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.934176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.934366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.934393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.934588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.934617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.934771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.934799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.934959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.934986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.935151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.935181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 
00:34:53.475 [2024-07-15 07:03:40.935350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.935379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.935520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.935545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.935690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.935717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.935859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.935908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.936070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.936096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.936282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.936310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.936465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.936495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.936692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.936719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.936854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.936889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.937064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 
00:34:53.475 [2024-07-15 07:03:40.937232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.937415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.937633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.937774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.937952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.937996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.938122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.938150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.475 [2024-07-15 07:03:40.938319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.475 [2024-07-15 07:03:40.938347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.475 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.938493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.938537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.938721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.938751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.938892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.938919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 
00:34:53.476 [2024-07-15 07:03:40.939045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.939072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.939216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.939242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.939422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.939448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.939590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.939633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.939814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.939842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.940039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.940065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.940248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.940277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.940439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.940468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.940631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.940657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 00:34:53.476 [2024-07-15 07:03:40.940775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.476 [2024-07-15 07:03:40.940817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.476 qpair failed and we were unable to recover it. 
00:34:53.476 [2024-07-15 07:03:40.941000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.941030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.941225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.941252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.941409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.941438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.941596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.941625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.941796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.941823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.942960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.942987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.943109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.943153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.943278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.943306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.943444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.943472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.943655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.943685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.943842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.943872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.944057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.944084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.944274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.944304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.944430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.944461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.944627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.944653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.944844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.944873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.945076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.945102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.945271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.945297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.945432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.945468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.945655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.945683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.945852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.945891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.946043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.476 [2024-07-15 07:03:40.946070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.476 qpair failed and we were unable to recover it.
00:34:53.476 [2024-07-15 07:03:40.946263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.946293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.946457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.946483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.946619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.946662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.946800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.946830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.947039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.947067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.947260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.947290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.947449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.947479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.947680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.947706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.947843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.947872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.948037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.948066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.948234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.948260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.948423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.948452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.948632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.948660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.948850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.948884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.949111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.949266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.949429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.949595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.949820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.949989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.950159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.950355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.950499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.950711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.950898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.950928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.951093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.951119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.951242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.951286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.951468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.951497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.951655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.951681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.951801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.951842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.952006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.952037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.952234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.952261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.952425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.952454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.952613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.952642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.952810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.952835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.953937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.477 [2024-07-15 07:03:40.953964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.477 qpair failed and we were unable to recover it.
00:34:53.477 [2024-07-15 07:03:40.954152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.954182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.954340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.954369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.954533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.954560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.954707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.954751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.954930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.954957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.955082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.955108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.955252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.955278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.955421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.955450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.955612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.955638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.955800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.955829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.956034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.956062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.956237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.956263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.956456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.956484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.956628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.956656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.956805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.956832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.957010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.957055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.957212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.957241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.957428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.957454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.957617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.957646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.957806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.957836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.958037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.958064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.958201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.958231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.958394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.958428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.958617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.958644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.958837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.958866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.959925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.959953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.960120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.960151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.960349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.960375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.960541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.960569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.478 [2024-07-15 07:03:40.960698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.478 [2024-07-15 07:03:40.960728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.478 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.960894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.960922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.961095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.961120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.961269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.961299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.961462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.961489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.961673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.961703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.961857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.961893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.962057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.962084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.962228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.962272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.962464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.962493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.962684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.962710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.962870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.962915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.963074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.963108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.963267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.963293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.963486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.963516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.963718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.963744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.963853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.963889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.964080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.964109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.964264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.964292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.964453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.964480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.964662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.964692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.964872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.964910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.965055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.965081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.965255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.965282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.965427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.965457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.965624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.965649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.965832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.965861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.966930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.966960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.967120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.967149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.967315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.967342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.967508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.967538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.967702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.967731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.967924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.967951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.968117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.968147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.968345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.968374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.968567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.479 [2024-07-15 07:03:40.968594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.479 qpair failed and we were unable to recover it.
00:34:53.479 [2024-07-15 07:03:40.968724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.968752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.968888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.968918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.969105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.969132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.969322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.969351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.969530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.969558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.969707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.969735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.969888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.969933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.970070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.970096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.970237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.970263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.970413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.970439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.970630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.970659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.970849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.970875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.971083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.971272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.971488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.971632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.971805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.971989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.972017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.972187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.972228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.972410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.972439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.972600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.972626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.972811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.972840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.973004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.973035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.973220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.973247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.973435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.973465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.973619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.973647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.973812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.480 [2024-07-15 07:03:40.973838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.480 qpair failed and we were unable to recover it.
00:34:53.480 [2024-07-15 07:03:40.974035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.974202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.974389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.974598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.974752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.974949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.974976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.975146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.975190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.975314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.975344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.975487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.975515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.975651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.975677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 
00:34:53.480 [2024-07-15 07:03:40.975847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.975886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.976050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.976076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.976230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.976275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.480 [2024-07-15 07:03:40.976430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.480 [2024-07-15 07:03:40.976460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.480 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.976620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.976647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.976791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.976834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.977027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.977057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.977225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.977252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.977375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.977419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.977610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.977639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 
00:34:53.481 [2024-07-15 07:03:40.977795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.977822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.977974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.978001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.978168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.978199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.978358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.978385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.978564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.978608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.978765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.978794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.978975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.979175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.979364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.979578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 
00:34:53.481 [2024-07-15 07:03:40.979769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.979956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.979985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.980148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.980181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.980331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.980376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.980510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.980540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.980669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.980696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.980843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.980895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.981054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.981083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.981246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.981273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.981459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.981488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 
00:34:53.481 [2024-07-15 07:03:40.981653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.981687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.981889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.981917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.982080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.982109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.982293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.982323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.982490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.982516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.982637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.982681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.982865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.982930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.983129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.983167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.983331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.983361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.983542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.983571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 
00:34:53.481 [2024-07-15 07:03:40.983737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.983763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.983925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.983957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.984145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.984175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.984334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.984361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.481 qpair failed and we were unable to recover it. 00:34:53.481 [2024-07-15 07:03:40.984492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.481 [2024-07-15 07:03:40.984535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.984704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.984732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.984911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.984938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.985070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.985099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.985234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.985264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.985449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.985476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 
00:34:53.482 [2024-07-15 07:03:40.985664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.985694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.985862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.985896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.986068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.986095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.986254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.986284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.986440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.986470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.986629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.986655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.986845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.986874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.987046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.987248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.987401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 
00:34:53.482 [2024-07-15 07:03:40.987572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.987736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.987888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.987933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.988090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.988120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.988283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.988310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.988476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.988505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.988635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.988665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.988857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.988890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.989078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.989108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.989290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.989320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 
00:34:53.482 [2024-07-15 07:03:40.989457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.989488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.989667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.989697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.989853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.989888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.990037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.990063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.990205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.990255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.990443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.990473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.990607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.990634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.990787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.990830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.991008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.482 [2024-07-15 07:03:40.991037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.482 qpair failed and we were unable to recover it. 00:34:53.482 [2024-07-15 07:03:40.991174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.991200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 
00:34:53.483 [2024-07-15 07:03:40.991392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.991421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.991552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.991582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.991778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.991805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.991970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.992203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.992373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.992561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.992753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.992921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.992948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.993093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.993121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 
00:34:53.483 [2024-07-15 07:03:40.993327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.993355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.993532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.993559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.993674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.993702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.993821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.993848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.993972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.994149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.994357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.994565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.994757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.994934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.994965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 
00:34:53.483 [2024-07-15 07:03:40.995091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.995118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.995284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.995327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.995481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.995511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.995677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.995704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.995808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.995836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.996002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.996219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.996369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.996562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.996757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 
00:34:53.483 [2024-07-15 07:03:40.996899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.996932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.997110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.997140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.997332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.997358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.997513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.997547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.997814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.997866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.998047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.998073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.998204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.998247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.998415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.998442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.998663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.998715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 00:34:53.483 [2024-07-15 07:03:40.998944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.483 [2024-07-15 07:03:40.998972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.483 qpair failed and we were unable to recover it. 
00:34:53.483 [2024-07-15 07:03:40.999087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:40.999113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:40.999292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:40.999319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:40.999484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:40.999513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:40.999636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:40.999666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:40.999830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:40.999860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.000018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.000160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.000332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.000502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.000701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 
00:34:53.484 [2024-07-15 07:03:41.000892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.000926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.001080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.001110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.001365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.001422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.001587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.001614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.001729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.001771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.001945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.001991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.002192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.002221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.002396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.002428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.002690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.002740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.002899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.002930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 
00:34:53.484 [2024-07-15 07:03:41.003123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.003156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.003465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.003515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.003704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.003731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.003842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.003893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.004055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.004084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.004248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.004274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.004393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.004421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.004627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.004657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.004825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.004853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.005042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.005069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 
00:34:53.484 [2024-07-15 07:03:41.005309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.005367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.005558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.005585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.005740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.005771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.005929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.005960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.006121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.006147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.006312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.006342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.006619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.006669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.006859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.006906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.007070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.007100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.484 [2024-07-15 07:03:41.007262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.007293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 
00:34:53.484 [2024-07-15 07:03:41.007485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.484 [2024-07-15 07:03:41.007512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.484 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.007674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.007704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.007861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.007901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.008099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.008126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.008265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.008293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.008438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.008466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.008642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.008670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.008828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.008858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.009034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.009261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 
00:34:53.485 [2024-07-15 07:03:41.009410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.009586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.009757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.009939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.009969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.010139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.010177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.010311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.010339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.010530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.010559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.010725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.010752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.010925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.010952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.011117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.011146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 
00:34:53.485 [2024-07-15 07:03:41.011343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.011393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.011557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.011584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.011746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.011775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.011985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.012181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.012325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.012535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.012773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.012952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.012982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.013176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.013206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 
00:34:53.485 [2024-07-15 07:03:41.013363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.013395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.013591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.013620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.013779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.013809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.013976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.014004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.014125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.014151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.014353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.014419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.014611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.014638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.014804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.014833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.015004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.015035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 00:34:53.485 [2024-07-15 07:03:41.015234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.015261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.485 qpair failed and we were unable to recover it. 
00:34:53.485 [2024-07-15 07:03:41.015435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.485 [2024-07-15 07:03:41.015464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.015628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.015654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.015822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.015849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.015987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.016015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.016207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.016236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.016403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.016429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.016586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.016616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.016798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.016828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.016976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.017003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.017176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.017202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 
00:34:53.486 [2024-07-15 07:03:41.017374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.017406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.017593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.017620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.017810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.017839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.017991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.018184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.018404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.018598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.018792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.018941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.018983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.019141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.019181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 
00:34:53.486 [2024-07-15 07:03:41.019345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.019372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.019535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.019564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.019749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.019778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.019937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.019964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.020154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.020184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.020422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.020476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.020668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.020695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.020811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.020837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.021014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.021186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 
00:34:53.486 [2024-07-15 07:03:41.021360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.021542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.021778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.021964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.021994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.022124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.022163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.022353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.022380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.022531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.022558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.486 [2024-07-15 07:03:41.022702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.486 [2024-07-15 07:03:41.022730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.486 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.022939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.022967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.023170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 
00:34:53.487 [2024-07-15 07:03:41.023325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.023354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.023517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.023544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.023730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.023760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.023943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.023973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.024141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.024167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.024304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.024331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.024517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.024547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.024717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.024744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.024910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.024940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.025078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 
00:34:53.487 [2024-07-15 07:03:41.025285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.025473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.025656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.025809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.025963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.025990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.026130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.026184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.026324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.026351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.026472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.026500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.026703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.026730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.026888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.026927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 
00:34:53.487 [2024-07-15 07:03:41.027116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.027145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.027318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.027347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.027538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.027565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.027726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.027756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.027940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.027969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.028135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.487 [2024-07-15 07:03:41.028167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.487 qpair failed and we were unable to recover it. 00:34:53.487 [2024-07-15 07:03:41.028321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.028351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.028511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.028542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.028700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.028727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.028871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.028935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 
00:34:53.488 [2024-07-15 07:03:41.029076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.029112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.029282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.029309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.029450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.029492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.029656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.029686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.029866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.029904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.030080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.030110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.030269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.030298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.030463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.030490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.030680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.030709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.030864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.030903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 
00:34:53.488 [2024-07-15 07:03:41.031066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.031092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.031253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.031282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.031440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.031469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.031636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.031663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.031831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.031861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.032065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.032091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.032261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.032287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.032449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.032479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.488 [2024-07-15 07:03:41.032662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.488 [2024-07-15 07:03:41.032692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.488 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.032856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.032892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 
00:34:53.771 [2024-07-15 07:03:41.033065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.033096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.033284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.033314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.033483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.033510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.033674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.033703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.033859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.033899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.034082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.034108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.034271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.034303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.034479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.034523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.034708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.034737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.034854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.034891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 
00:34:53.771 [2024-07-15 07:03:41.035026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.035224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.035421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.035574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.035738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.035899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.035927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.036065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.036096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.036260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.036287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.036403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.771 [2024-07-15 07:03:41.036446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.771 qpair failed and we were unable to recover it. 00:34:53.771 [2024-07-15 07:03:41.036680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.036733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 
00:34:53.772 [2024-07-15 07:03:41.036893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.036926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.037073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.037101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.037337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.037390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.037520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.037555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.037739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.037769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.037984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.038012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.038163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.038190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.038382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.038412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.038684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.038732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 00:34:53.772 [2024-07-15 07:03:41.038909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.772 [2024-07-15 07:03:41.038938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.772 qpair failed and we were unable to recover it. 
00:34:53.773 [2024-07-15 07:03:41.048892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.048920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.049090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.049305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.049524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.049672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.049818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.049994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.050022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.050236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.050282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.050474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.050524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.773 [2024-07-15 07:03:41.050719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.773 [2024-07-15 07:03:41.050747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.773 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.052694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.052739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.052897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.052926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.053076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.053121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.053276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.053306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.053497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.053525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.053701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.053752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.053956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.054003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.054207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.054235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.054359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.054386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.774 [2024-07-15 07:03:41.054533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.774 [2024-07-15 07:03:41.054561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420
00:34:53.774 qpair failed and we were unable to recover it.
00:34:53.777 [2024-07-15 07:03:41.078194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.078224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.078424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.078450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.078695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.078748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.078934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.078965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.079099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.079126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.079290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.079331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.079510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.079539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.079687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.079715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.079864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.079901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.080072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.080099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 
00:34:53.777 [2024-07-15 07:03:41.080243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.080270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.080459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.080489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.080668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.080697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.080865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.080901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.081048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.081076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.081246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.081273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.081393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.081421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.081610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.081641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.081823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.081860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.082040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.082069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 
00:34:53.777 [2024-07-15 07:03:41.082232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.082262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.082508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.777 [2024-07-15 07:03:41.082562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.777 qpair failed and we were unable to recover it. 00:34:53.777 [2024-07-15 07:03:41.082748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.082775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.082965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.082996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.083189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.083220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.083357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.083386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.083551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.083578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.083720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.083747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.083935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.083964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.084113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.084140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 
00:34:53.778 [2024-07-15 07:03:41.084333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.084363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.084561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.084589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.084754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.084785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.084967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.084998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.085131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.085158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.085350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.085381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.085544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.085573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.085732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.085759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.085886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.085915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.086063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.086089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 
00:34:53.778 [2024-07-15 07:03:41.086231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.086257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.086405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.086432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.086621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.086651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.086789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.086817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.086963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.087167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.087335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.087519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.087711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.087908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.087936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 
00:34:53.778 [2024-07-15 07:03:41.088058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.088101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.088279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.088310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.088480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.088506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.088650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.088678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.088846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.088881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.089071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.089098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.089326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.089386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.089634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.089683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.089847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.089885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.090014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.090041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 
00:34:53.778 [2024-07-15 07:03:41.090175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.090205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.090382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.778 [2024-07-15 07:03:41.090409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.778 qpair failed and we were unable to recover it. 00:34:53.778 [2024-07-15 07:03:41.090625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.090681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.090817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.090847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.090989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.091016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.091214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.091274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.091402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.091431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.091594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.091621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.091804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.091833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.092018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 
00:34:53.779 [2024-07-15 07:03:41.092159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.092332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.092534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.092701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.092916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.092962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.093132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.093165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.093331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.093358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.093527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.093590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.093744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.093773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.093934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.093962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 
00:34:53.779 [2024-07-15 07:03:41.094125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.094156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.094349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.094376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.094521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.094549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.094721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.094751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.094890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.094920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.095113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.095140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.095298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.095328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.095497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.095525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.095697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.095723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.095892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.095922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 
00:34:53.779 [2024-07-15 07:03:41.096082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.096112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.096288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.096316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.096461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.096503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.096663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.096692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.096857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.096892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.097068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.097096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.097275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.097304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.097432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.097458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.097623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.097656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.097803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.097847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 
00:34:53.779 [2024-07-15 07:03:41.098020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.098047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.098203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.098232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.779 [2024-07-15 07:03:41.098384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.779 [2024-07-15 07:03:41.098414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.779 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.098571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.098597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.098711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.098737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.098944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.098978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.099144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.099171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.099295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.099321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.099462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.099488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.099622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.099648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 
00:34:53.780 [2024-07-15 07:03:41.099801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.099830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.100965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.100994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.101146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.101173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.101338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.101369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.101499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.101529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 
00:34:53.780 [2024-07-15 07:03:41.101782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.101834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.102884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.102922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.103040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.103067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.103222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.103250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.103412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.103441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 
00:34:53.780 [2024-07-15 07:03:41.103601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.103628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.103784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.103812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.103973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.104135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.104332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.104542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.104766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.780 [2024-07-15 07:03:41.104926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.780 [2024-07-15 07:03:41.104953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.780 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.105120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.105163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.105404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.105434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 
00:34:53.781 [2024-07-15 07:03:41.105573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.105600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.105733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.105760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.105941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.105971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.106110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.106146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.106295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.106339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.106504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.106533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.106678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.106705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.106848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.106897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.107038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.107065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.107245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.107272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 
00:34:53.781 [2024-07-15 07:03:41.107404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.107433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.107707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.107758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.107929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.107956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.108105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.108132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.108334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.108363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.108528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.108554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.108718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.108748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.108915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.108960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.109158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.109186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.109353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.109384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 
00:34:53.781 [2024-07-15 07:03:41.109514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.109543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.109745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.109773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.109938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.109983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.110132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.110158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.110349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.110376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.110517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.110544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.110713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.110743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.110886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.110924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.111072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.111117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.111283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.111309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 
00:34:53.781 [2024-07-15 07:03:41.111428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.111454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.111641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.111670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.111795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.111824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.112027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.112196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.112422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.112637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.781 [2024-07-15 07:03:41.112799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.781 qpair failed and we were unable to recover it. 00:34:53.781 [2024-07-15 07:03:41.112980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.113026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.113211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.113248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 
00:34:53.782 [2024-07-15 07:03:41.113486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.113532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.113790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.113817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.113964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.113991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.114139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.114166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.114311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.114338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.114505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.114532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.114717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.114747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.114905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.114951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.115121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.115148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.115308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.115337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 
00:34:53.782 [2024-07-15 07:03:41.115556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.115608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.115796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.115823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.116906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.116934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.117085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.117114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.117302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.117331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 
00:34:53.782 [2024-07-15 07:03:41.117498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.117525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.117645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.117671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.117791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.117818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.117996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.118024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.118186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.118216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.118399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.118428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.118618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.118645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.118817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.118847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.118992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.119191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 
00:34:53.782 [2024-07-15 07:03:41.119378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.119537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.119735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.119911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.119957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.120141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.120172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.120369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.120395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.120520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.120549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.120719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.120764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.782 [2024-07-15 07:03:41.120902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.782 [2024-07-15 07:03:41.120930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.782 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.121074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.121121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 
00:34:53.783 [2024-07-15 07:03:41.121250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.121279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.121440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.121468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.121657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.121687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.121866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.121904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.122069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.122097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.122301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.122345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.122595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.122648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.122835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.122862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.123019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.123048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.123243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.123273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 
00:34:53.783 [2024-07-15 07:03:41.123461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.123488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.123697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.123761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.123923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.123955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.124097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.124125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.124289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.124334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.124499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.124526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.124667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.124694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.124890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.124920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.125123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.125164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.125364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.125392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 
00:34:53.783 [2024-07-15 07:03:41.125603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.125660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.125831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.125858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.126914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.126944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.127084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.127111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.127296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.127325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 
00:34:53.783 [2024-07-15 07:03:41.127596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.127647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.127831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.127858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.128899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.128926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.129117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.129147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 00:34:53.783 [2024-07-15 07:03:41.129263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.783 [2024-07-15 07:03:41.129292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.783 qpair failed and we were unable to recover it. 
00:34:53.784 [2024-07-15 07:03:41.129487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.129513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.129639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.129666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.129786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.129814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.129934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.129962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.130106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.130150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.130305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.130335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.130470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.130496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.130651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.130692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.130853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.130897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.131061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 
00:34:53.784 [2024-07-15 07:03:41.131240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.131430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.131628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.131818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.131967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.131995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.132140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.132167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.132328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.132357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.132548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.132601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.132765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.132792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.132955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.132985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 
00:34:53.784 [2024-07-15 07:03:41.133130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.133157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.133306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.133334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.133496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.133525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.133695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.133722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.133859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.133891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.134004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.134031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.134185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.134214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.134391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.134418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.134609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.134662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.134814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.134843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 
00:34:53.784 [2024-07-15 07:03:41.134982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.135009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.135195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.135225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.135516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.135578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.135759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.135786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.135950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.784 [2024-07-15 07:03:41.135980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.784 qpair failed and we were unable to recover it. 00:34:53.784 [2024-07-15 07:03:41.136109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.136138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.136293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.136320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.136469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.136495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.136642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.136686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.136851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.136883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 
00:34:53.785 [2024-07-15 07:03:41.137021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.137048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.137212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.137241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.137406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.137432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.137592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.137623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.137823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.137869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.138082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.138111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.138286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.138317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.138510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.138540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.138699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.138726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 00:34:53.785 [2024-07-15 07:03:41.138912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.785 [2024-07-15 07:03:41.138942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.785 qpair failed and we were unable to recover it. 
00:34:53.789 [2024-07-15 07:03:41.173615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.789 [2024-07-15 07:03:41.173672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:53.790 qpair failed and we were unable to recover it.
00:34:53.790 [2024-07-15 07:03:41.177849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.177902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.178124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.178154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.178354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.178381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.178600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.178656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.178812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.178850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.179030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.179253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.179414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.179612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.179782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 
00:34:53.790 [2024-07-15 07:03:41.179967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.179999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.180173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.180200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.180367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.180394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.180522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.180552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.180717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.180744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.180928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.180960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.181119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.181148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.181341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.181368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.181584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.181638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 00:34:53.790 [2024-07-15 07:03:41.181819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.790 [2024-07-15 07:03:41.181849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.790 qpair failed and we were unable to recover it. 
00:34:53.790 [2024-07-15 07:03:41.182015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.182043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.182189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.182235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.182393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.182422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.182591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.182620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.182770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.182798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.182985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.183179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.183396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.183613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.183777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 
00:34:53.791 [2024-07-15 07:03:41.183926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.183954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.184106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.184137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.184305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.184331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.184539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.184596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.184718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.184749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.184916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.184944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.185090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.185116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.185260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.185287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.185405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.185431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.185581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.185624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 
00:34:53.791 [2024-07-15 07:03:41.185807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.185836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.186946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.186974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.187094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.187122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.187309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.187339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.187522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.187552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 
00:34:53.791 [2024-07-15 07:03:41.187713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.187740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.187912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.187958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.188153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.188179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.188439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.188466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.188630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.188660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.188838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.188867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.189043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.189071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.189191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.189218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.189364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.189390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.791 [2024-07-15 07:03:41.189534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.189561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 
00:34:53.791 [2024-07-15 07:03:41.189701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.791 [2024-07-15 07:03:41.189729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.791 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.189866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.189900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.190049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.190075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.190240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.190268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.190463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.190490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.190657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.190684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.190815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.190843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.191019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.191046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.191190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.191217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.191381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.191411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 
00:34:53.792 [2024-07-15 07:03:41.191597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.191626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.191800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.191826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.192069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.192100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.192297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.192324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.192471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.192498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.192708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.192738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.192919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.192949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.193117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.193145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.193309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.193340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.193501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.193531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 
00:34:53.792 [2024-07-15 07:03:41.193659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.193686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.193837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.193887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.194043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.194073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.194272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.194299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.194484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.194517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.194676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.194705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.194902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.194929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.195070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.195100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.195294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.195325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.195511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.195538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 
00:34:53.792 [2024-07-15 07:03:41.195727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.195756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.195940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.195971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.196113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.196141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.196324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.196354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.196510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.196539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.196713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.196740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.196905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.196935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.197085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.197114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.197281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.197307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.197450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.197493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 
00:34:53.792 [2024-07-15 07:03:41.197676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.792 [2024-07-15 07:03:41.197706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.792 qpair failed and we were unable to recover it. 00:34:53.792 [2024-07-15 07:03:41.197862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.197900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.198067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.198094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.198302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.198328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.198475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.198502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.198669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.198696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.198906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.198937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.199124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.199151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.199317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.199347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.199482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.199513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 
00:34:53.793 [2024-07-15 07:03:41.199702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.199729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.199927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.199958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.200137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.200166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.200364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.200391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.200577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.200607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.200755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.200784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.200934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.200961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.201114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.201162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.201322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.201351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.201492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.201518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 
00:34:53.793 [2024-07-15 07:03:41.201689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.201731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.201900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.201927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.202073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.202100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.202216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.202260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.202453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.202485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.202606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.202633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.202782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.202808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.203010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.203179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.203351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 
00:34:53.793 [2024-07-15 07:03:41.203573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.203754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.203933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.203962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.204105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.204131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.204276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.204303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.793 [2024-07-15 07:03:41.204460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.793 [2024-07-15 07:03:41.204490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.793 qpair failed and we were unable to recover it. 00:34:53.794 [2024-07-15 07:03:41.204613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.794 [2024-07-15 07:03:41.204642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.794 qpair failed and we were unable to recover it. 00:34:53.794 [2024-07-15 07:03:41.204884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.794 [2024-07-15 07:03:41.204912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.794 qpair failed and we were unable to recover it. 00:34:53.794 [2024-07-15 07:03:41.205084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.794 [2024-07-15 07:03:41.205114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.794 qpair failed and we were unable to recover it. 00:34:53.794 [2024-07-15 07:03:41.205271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.794 [2024-07-15 07:03:41.205300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.794 qpair failed and we were unable to recover it. 
00:34:53.794 [the same three-line failure record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, timestamps 2024-07-15 07:03:41.205471 through 07:03:41.244086]
00:34:53.799 [2024-07-15 07:03:41.244241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.244270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.244463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.244489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.244651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.244680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.244837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.244867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.245959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.245990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 
00:34:53.799 [2024-07-15 07:03:41.246158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.246185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.246337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.246363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.246509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.246535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.246706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.246733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.246864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.246899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.247083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.247112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.247302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.247328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.247451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.247478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.247621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.247647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.247829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.247855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 
00:34:53.799 [2024-07-15 07:03:41.248025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.248246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.248447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.248620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.248790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.248965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.248993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.249141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.249185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.249335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.249364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.249529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.249555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.249736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.249765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 
00:34:53.799 [2024-07-15 07:03:41.249905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.249936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.250071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.250097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.250239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.799 [2024-07-15 07:03:41.250281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.799 qpair failed and we were unable to recover it. 00:34:53.799 [2024-07-15 07:03:41.250437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.250475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.250620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.250647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.250794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.250821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.250983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.251013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.251206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.251233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.251383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.251447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.251606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.251637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 
00:34:53.800 [2024-07-15 07:03:41.251779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.251806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.251959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.252018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.252218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.252249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.252447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.252475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.252712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.252766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.252948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.252978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.253120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.253146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.253310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.253337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.253530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.253560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.253754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.253780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 
00:34:53.800 [2024-07-15 07:03:41.253967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.253994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.254157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.254197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.254413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.254442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.254624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.254678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.254861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.254898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.255067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.255092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.255227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.255272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.255513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.255564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.255723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.255749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.255894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.255936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 
00:34:53.800 [2024-07-15 07:03:41.256094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.256123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.256314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.256341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.256557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.256609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.256770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.256799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.256979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.257006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.257201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.257231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.257388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.257417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.257566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.257593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.257785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.257814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.257999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.258026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 
00:34:53.800 [2024-07-15 07:03:41.258139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.258166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.258397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.258423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.258570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.258596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.800 [2024-07-15 07:03:41.259532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.800 [2024-07-15 07:03:41.259567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.800 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.259772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.259799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.259926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.259953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.260091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.260117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.260290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.260315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.260479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.260521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.260658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.260686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 
00:34:53.801 [2024-07-15 07:03:41.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.260899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.261084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.261108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.261284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.261312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.261500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.261526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.261665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.261692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.261850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.261884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.262046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.262071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.262231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.262259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.262482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.262510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.262677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.262702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 
00:34:53.801 [2024-07-15 07:03:41.262884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.262912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.263944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.263970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.264141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.264184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.264319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.264344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.264516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.264542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 
00:34:53.801 [2024-07-15 07:03:41.264682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.264709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.264898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.264924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.265712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.265745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.265910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.265954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.266096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.266122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.266313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.266369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.266568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.266597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.266737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.801 [2024-07-15 07:03:41.266762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.801 qpair failed and we were unable to recover it. 00:34:53.801 [2024-07-15 07:03:41.266932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.266958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.267460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.267491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 
00:34:53.802 [2024-07-15 07:03:41.267663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.267689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.267882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.267911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.268939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.268965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.269085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.269230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 
00:34:53.802 [2024-07-15 07:03:41.269395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.269607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.269775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.269950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.269977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.270091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.270116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.270255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.270280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.270470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.270497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.270624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.270652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.270820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.270845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 00:34:53.802 [2024-07-15 07:03:41.271029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.802 [2024-07-15 07:03:41.271055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.802 qpair failed and we were unable to recover it. 
00:34:53.802 [2024-07-15 07:03:41.271216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:53.802 [2024-07-15 07:03:41.271244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:53.802 qpair failed and we were unable to recover it.
00:34:53.807 [... the same three-line failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 07:03:41.271216 through 07:03:41.310560; duplicate entries omitted ...]
00:34:53.807 [2024-07-15 07:03:41.310703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.310727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.310870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.310911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.311046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.311074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.311198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.311225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.311371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.311395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.311541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.311567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.807 qpair failed and we were unable to recover it. 00:34:53.807 [2024-07-15 07:03:41.311742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.807 [2024-07-15 07:03:41.311770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.311904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.311931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.312047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.312072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.312236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.312264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 
00:34:53.808 [2024-07-15 07:03:41.312454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.312479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.312592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.312635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.312796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.312824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.312976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.313116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.313334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.313506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.313689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.313892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.313921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.314081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.314106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 
00:34:53.808 [2024-07-15 07:03:41.314252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.314295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.314474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.314502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.314660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.314685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.314826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.314893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.315830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.315856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 
00:34:53.808 [2024-07-15 07:03:41.315989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.316138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.316314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.316482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.316679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.316850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.316883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 
00:34:53.808 [2024-07-15 07:03:41.317682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.317874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.317999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.318141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.318310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.318478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.318675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.808 qpair failed and we were unable to recover it. 00:34:53.808 [2024-07-15 07:03:41.318817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.808 [2024-07-15 07:03:41.318843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.318986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.319139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 
00:34:53.809 [2024-07-15 07:03:41.319341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.319506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.319652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.319817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.319843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.319972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.320125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.320331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.320501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.320696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.320838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.320864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 
00:34:53.809 [2024-07-15 07:03:41.320989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.321938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.321964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.322104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.322264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 
00:34:53.809 [2024-07-15 07:03:41.322606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.322768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.322945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.322971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.323928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.323954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.324076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 
00:34:53.809 [2024-07-15 07:03:41.324236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.324404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.324576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.324743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.324934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.324960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.325107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.325133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.325278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.325304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.325451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.809 [2024-07-15 07:03:41.325476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.809 qpair failed and we were unable to recover it. 00:34:53.809 [2024-07-15 07:03:41.325624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.325650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.325793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.325818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 
00:34:53.810 [2024-07-15 07:03:41.325942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.325968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.326904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.326931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.327041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.327180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.327368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 
00:34:53.810 [2024-07-15 07:03:41.327539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.327710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.327883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.327909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.328854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.328887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.329033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 
00:34:53.810 [2024-07-15 07:03:41.329190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.329387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.329559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.329705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.329902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.329928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.330052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.330195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.330334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.330511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.330675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 
00:34:53.810 [2024-07-15 07:03:41.330844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.330869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.331007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.331033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.810 [2024-07-15 07:03:41.331146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.810 [2024-07-15 07:03:41.331171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.810 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.331317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.331342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.331518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.331543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.331710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.331735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.331845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.331870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.331998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.332024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.332136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.332161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 00:34:53.811 [2024-07-15 07:03:41.332301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.332327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 
00:34:53.811 [2024-07-15 07:03:41.332500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.811 [2024-07-15 07:03:41.332542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:53.811 qpair failed and we were unable to recover it. 
00:34:53.811 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for every reconnect attempt from 2024-07-15 07:03:41.332500 through 2024-07-15 07:03:41.364778, all against tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 ...] 
00:34:54.100 [2024-07-15 07:03:41.364915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.100 [2024-07-15 07:03:41.364956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.100 qpair failed and we were unable to recover it. 
00:34:54.100 [... the same three-message sequence repeats from 2024-07-15 07:03:41.364915 through 2024-07-15 07:03:41.373563, all against tqpair=0x21b6840 with addr=10.0.0.2, port=4420 ...] 
00:34:54.101 [2024-07-15 07:03:41.373717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.373745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.373857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.373891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.374913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.374939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.375052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.375077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.375218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.375246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 
00:34:54.101 [2024-07-15 07:03:41.375378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.375420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.375566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.375594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.375748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.375776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.375961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.376152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.376348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.376581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.376766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.376923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.376949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.377115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.377145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 
00:34:54.101 [2024-07-15 07:03:41.377293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.377321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.377479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.377506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.377654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.377682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.377852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.377881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.101 qpair failed and we were unable to recover it. 00:34:54.101 [2024-07-15 07:03:41.378026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.101 [2024-07-15 07:03:41.378050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.378203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.378231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.378362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.378390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.378521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.378550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.378730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.378758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.378887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.378930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 
00:34:54.102 [2024-07-15 07:03:41.379055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.379242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.379401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.379581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.379752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.379955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.379980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.380093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.380118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.380285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.380310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.380470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.380497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.380654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.380681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 
00:34:54.102 [2024-07-15 07:03:41.380844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.380869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.380995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.381129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.381343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.381488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.381706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.381898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.381924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.382048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.382186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.382387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 
00:34:54.102 [2024-07-15 07:03:41.382538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.382728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.382896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.382922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.383872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.383907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.384041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 
00:34:54.102 [2024-07-15 07:03:41.384207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.384362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.384532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.384758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.384916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.102 [2024-07-15 07:03:41.384942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.102 qpair failed and we were unable to recover it. 00:34:54.102 [2024-07-15 07:03:41.385082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.385107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.385272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.385299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.385448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.385476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.385647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.385671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.385819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.385847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 
00:34:54.103 [2024-07-15 07:03:41.386001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.386144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.386341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.386580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.386794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.386961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.386986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.387092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.387116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.387263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.387304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.387452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.387480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.387657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.387685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 
00:34:54.103 [2024-07-15 07:03:41.387810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.387838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.388914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.388961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.389075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.389101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.389237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.389262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.389419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.389447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 
00:34:54.103 [2024-07-15 07:03:41.389645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.389686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.389837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.389862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.390907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.390933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.391054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.391079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.391271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.391315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 
00:34:54.103 [2024-07-15 07:03:41.391486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.391518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.391693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.391722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.391888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.391914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.392087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.392113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.392280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.392308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.103 qpair failed and we were unable to recover it. 00:34:54.103 [2024-07-15 07:03:41.392493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.103 [2024-07-15 07:03:41.392520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.392688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.392713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.392889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.392934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.393129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.393158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.393361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.393409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 
00:34:54.104 [2024-07-15 07:03:41.393546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.393573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.393713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.393739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.393886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.393935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.394094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.394122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.394358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.394386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.394551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.394578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.394721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.394746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.394930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.394959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.395109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.395137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.395308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.395337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 
00:34:54.104 [2024-07-15 07:03:41.395474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.395499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.395673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.395698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.395854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.395884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.396067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.396095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.396300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.396329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.396498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.396526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.396690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.396715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.396854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.396884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.397074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.397102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.397256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.397284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 
00:34:54.104 [2024-07-15 07:03:41.397535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.397581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.397743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.397768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.397887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.397932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.398145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.398316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.398506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.398647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.398816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.104 [2024-07-15 07:03:41.398969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.104 [2024-07-15 07:03:41.399014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.104 qpair failed and we were unable to recover it. 00:34:54.105 [2024-07-15 07:03:41.399221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.105 [2024-07-15 07:03:41.399282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.105 qpair failed and we were unable to recover it. 
[... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x7ff3a0000b90 and tqpair=0x21b6840 through 2024-07-15 07:03:41.437716 ...]
00:34:54.110 [2024-07-15 07:03:41.437854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.110 [2024-07-15 07:03:41.437884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.110 qpair failed and we were unable to recover it.
00:34:54.110 [2024-07-15 07:03:41.437994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.438210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.438369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.438536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.438699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.438884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.438912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.439099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.439124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.439283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.439310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.439470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.439499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.439629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.439672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 
00:34:54.110 [2024-07-15 07:03:41.439854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.439890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.440078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.440270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.440428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.440616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.440841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.440997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.441141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.441313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.441472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 
00:34:54.110 [2024-07-15 07:03:41.441656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.441863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.441894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.442031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.442055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.442243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.442271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.442427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.442454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.442611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.442639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.442821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.442848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.443013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.443038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.443207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.443235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.443430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.443455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 
00:34:54.110 [2024-07-15 07:03:41.443725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.443753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.443902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.443944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.444112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.444323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.444478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.444661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.444818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.110 [2024-07-15 07:03:41.444990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.110 [2024-07-15 07:03:41.445015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.110 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.445162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.445187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.445340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.445368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 
00:34:54.111 [2024-07-15 07:03:41.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.445543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.445698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.445727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.445905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.445931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.446042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.446067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.446259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.446287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.446444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.446472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.446648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.446676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.446817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.446841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.447013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.447229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 
00:34:54.111 [2024-07-15 07:03:41.447429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.447629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.447797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.447955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.447982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.448120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.448145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.448333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.448361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.448540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.448567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.448722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.448750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.448938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.448963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.449107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.449132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 
00:34:54.111 [2024-07-15 07:03:41.449331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.449355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.449481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.449523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.449744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.449772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.449905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.449930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.450966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.450991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 
00:34:54.111 [2024-07-15 07:03:41.451130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.451155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.451279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.451320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.451453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.451480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.451615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.451644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.451824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.451852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.451995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.452020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.452127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.452168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.452335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.452360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.452552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.111 [2024-07-15 07:03:41.452580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.111 qpair failed and we were unable to recover it. 00:34:54.111 [2024-07-15 07:03:41.452695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.452722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 
00:34:54.112 [2024-07-15 07:03:41.452856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.452899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.453059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.453084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.453207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.453232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.453399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.453425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.453588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.453616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.453809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.453837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.454007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.454033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.454182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.454207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.454345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.454370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.454536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.454564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 
00:34:54.112 [2024-07-15 07:03:41.454784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.454811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.454977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.455003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.455151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.455194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.455353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.455381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.455590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.455618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.455743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.455775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.455977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.456131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.456320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.456537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 
00:34:54.112 [2024-07-15 07:03:41.456688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.456871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.456903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.457861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.457894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.458058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.458227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 
00:34:54.112 [2024-07-15 07:03:41.458379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.458550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.458732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.458925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.458951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.112 qpair failed and we were unable to recover it. 00:34:54.112 [2024-07-15 07:03:41.459126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.112 [2024-07-15 07:03:41.459150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.459327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.459354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.459510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.459538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.459698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.459725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.459875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.459904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.460040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 
00:34:54.113 [2024-07-15 07:03:41.460187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.460365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.460543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.460762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.460930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.460959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.461110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.461136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.461312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.461336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.461452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.461494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.461619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.461647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.461809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.461836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 
00:34:54.113 [2024-07-15 07:03:41.462022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.462193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.462384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.462584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.462787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.462958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.462984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.463130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.463171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.463291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.463319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.463506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.463531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 00:34:54.113 [2024-07-15 07:03:41.463644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.113 [2024-07-15 07:03:41.463687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.113 qpair failed and we were unable to recover it. 
[... the same three-line failure cluster repeats roughly 200 more times, with only the timestamps advancing from 07:03:41.463 through 07:03:41.499, always for tqpair=0x21b6840, addr=10.0.0.2, port=4420 ...]
00:34:54.118 [2024-07-15 07:03:41.499376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.118 [2024-07-15 07:03:41.499408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.118 qpair failed and we were unable to recover it.
00:34:54.118 [2024-07-15 07:03:41.499542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.499569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.499700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.499728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.499861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.499893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.500066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.500108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.500262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.500289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.500451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.500480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.500637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.500662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.500849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.500882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.501053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.501078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.501266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.501294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 
00:34:54.118 [2024-07-15 07:03:41.501455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.501479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.501613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.501655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.501822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.501850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.118 qpair failed and we were unable to recover it. 00:34:54.118 [2024-07-15 07:03:41.502055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.118 [2024-07-15 07:03:41.502081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.502221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.502246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.502438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.502465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.502615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.502643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.502826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.502854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.503038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.503063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.503246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.503273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 
00:34:54.119 [2024-07-15 07:03:41.503407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.503435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.503592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.503621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.503830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.503858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.504046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.504072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.504218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.504266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.504451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.504478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.504634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.504658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.504816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.504844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.505009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.505163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 
00:34:54.119 [2024-07-15 07:03:41.505329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.505497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.505701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.505892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.505921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.506139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.506287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.506481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.506712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.506841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.506993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 
00:34:54.119 [2024-07-15 07:03:41.507189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.507345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.507509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.507676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.507900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.507925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.508075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.508217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.508353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.508516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.508664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 
00:34:54.119 [2024-07-15 07:03:41.508874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.508906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.509069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.509096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.509248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.509275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.509428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.509455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.119 qpair failed and we were unable to recover it. 00:34:54.119 [2024-07-15 07:03:41.509647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.119 [2024-07-15 07:03:41.509672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.509835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.509862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.510047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.510075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.510238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.510266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.510424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.510449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.510634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.510662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 
00:34:54.120 [2024-07-15 07:03:41.510793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.510821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.510978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.511148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.511322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.511508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.511670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.511830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.511855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.512006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.512161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.512329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 
00:34:54.120 [2024-07-15 07:03:41.512551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.512729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.512958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.512984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.513128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.513168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.513332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.513356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.513493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.513536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.513687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.513714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.513899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.513927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.514086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.514112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.514223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.514266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 
00:34:54.120 [2024-07-15 07:03:41.514426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.514454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.514617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.514645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.514798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.514823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.515829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.515854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.516076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 
00:34:54.120 [2024-07-15 07:03:41.516215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.516408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.516571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.516726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.120 [2024-07-15 07:03:41.516920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.120 [2024-07-15 07:03:41.516949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.120 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.517055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.517245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.517436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.517605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.517745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 
00:34:54.121 [2024-07-15 07:03:41.517959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.517988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.518177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.518205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.518361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.518385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.518580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.518607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.518731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.518758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.518947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.518975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.519167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.519192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.519325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.519354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.519509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.519537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.519697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.519724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 
00:34:54.121 [2024-07-15 07:03:41.519859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.519889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.520892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.520920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.521074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.521102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.521264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.521288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.521425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.521469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 
00:34:54.121 [2024-07-15 07:03:41.521621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.521648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.521782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.521811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.521980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.522193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.522399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.522612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.522772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.522917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.522943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.121 [2024-07-15 07:03:41.523113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.121 [2024-07-15 07:03:41.523138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.121 qpair failed and we were unable to recover it. 00:34:54.122 [2024-07-15 07:03:41.523280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.122 [2024-07-15 07:03:41.523307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.122 qpair failed and we were unable to recover it. 
00:34:54.122 [2024-07-15 07:03:41.523495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.122 [2024-07-15 07:03:41.523520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.122 qpair failed and we were unable to recover it.
00:34:54.122 [... the same three-line error cluster repeats back-to-back approximately 210 times in this span, ending at [2024-07-15 07:03:41.561646] (log time 00:34:54.127); only the timestamps change, and every attempt targets tqpair=0x21b6840, addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:54.127 [2024-07-15 07:03:41.561807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.561835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.561984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.562146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.562325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.562490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.562704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.562927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.562953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.563100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.563125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.563261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.563289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.563472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.563500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 
00:34:54.127 [2024-07-15 07:03:41.563642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.563667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.563772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.127 [2024-07-15 07:03:41.563797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.127 qpair failed and we were unable to recover it. 00:34:54.127 [2024-07-15 07:03:41.564008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.564201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.564370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.564514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.564673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.564832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.564860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.565028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.565191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 
00:34:54.128 [2024-07-15 07:03:41.565414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.565628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.565809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.565952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.565977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.566115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.566140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.566297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.566325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.566484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.566509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.566659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.566687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.566808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.566835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.567034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 
00:34:54.128 [2024-07-15 07:03:41.567226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.567378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.567563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.567723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.567941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.567967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.568076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.568283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.568424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.568557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.568748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 
00:34:54.128 [2024-07-15 07:03:41.568918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.568946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.569102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.569130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.569285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.569309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.569456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.569498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.569658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.569686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.569834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.569862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.570031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.570169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.570337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.570564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 
00:34:54.128 [2024-07-15 07:03:41.570715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.570867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.570900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.571053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.128 [2024-07-15 07:03:41.571078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.128 qpair failed and we were unable to recover it. 00:34:54.128 [2024-07-15 07:03:41.571231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.571272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.571409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.571436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.571577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.571602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.571767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.571794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.571954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.571982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.572112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.572137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.572309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.572352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 
00:34:54.129 [2024-07-15 07:03:41.572504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.572531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.572698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.572725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.572895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.572920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.573918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.573946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.574069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 
00:34:54.129 [2024-07-15 07:03:41.574243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.574411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.574567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.574754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.574944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.574970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.575089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.575129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.575292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.575319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.575479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.575507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.575649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.575674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.575820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.575845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 
00:34:54.129 [2024-07-15 07:03:41.576063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.576229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.576396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.576538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.576698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.576888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.576916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.577099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.577124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.577278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.577305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.129 [2024-07-15 07:03:41.577462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.129 [2024-07-15 07:03:41.577490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.129 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.577639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.577666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 
00:34:54.130 [2024-07-15 07:03:41.577796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.577821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.577963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.577989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.578138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.578167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.578322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.578355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.578514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.578539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.578682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.578707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.578849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.578897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.579059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.579219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.579404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 
00:34:54.130 [2024-07-15 07:03:41.579569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.579728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.579947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.579973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.580960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.580989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.581171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.581198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 
00:34:54.130 [2024-07-15 07:03:41.581362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.581388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.581573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.581601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.581721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.581749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.581915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.581943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.582904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.582930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 
00:34:54.130 [2024-07-15 07:03:41.583080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.583108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.583268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.583296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.583424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.130 [2024-07-15 07:03:41.583449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.130 qpair failed and we were unable to recover it. 00:34:54.130 [2024-07-15 07:03:41.583573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.583598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.583776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.583801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.583949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.583975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.584173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.584198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.584355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.584383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.584502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.584530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 00:34:54.131 [2024-07-15 07:03:41.584691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.131 [2024-07-15 07:03:41.584719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.131 qpair failed and we were unable to recover it. 
00:34:54.131 [2024-07-15 07:03:41.584886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.131 [2024-07-15 07:03:41.584911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.131 qpair failed and we were unable to recover it.
00:34:54.132 [... the three-line connect()/qpair-error sequence above repeats continuously for tqpair=0x21b6840, with timestamps from 07:03:41.584886 through 07:03:41.596852 ...]
00:34:54.132 [2024-07-15 07:03:41.597040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.132 [2024-07-15 07:03:41.597075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.132 qpair failed and we were unable to recover it.
00:34:54.136 [... the same sequence then repeats for tqpair=0x7ff3a8000b90, with timestamps from 07:03:41.597040 through 07:03:41.622243 ...]
00:34:54.136 [2024-07-15 07:03:41.622406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.622434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.622582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.622611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.622797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.622826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.622989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.623938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.623964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 
00:34:54.136 [2024-07-15 07:03:41.624114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.624280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.624417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.624579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.624770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.624955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.624981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.625124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.625150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.625289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.625314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.625486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.625512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 00:34:54.136 [2024-07-15 07:03:41.625677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.136 [2024-07-15 07:03:41.625702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.136 qpair failed and we were unable to recover it. 
00:34:54.136 [2024-07-15 07:03:41.625845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.625870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.625991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.626826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.626985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.627155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.627321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 
00:34:54.137 [2024-07-15 07:03:41.627517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.627695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.627833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.627858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.628919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.628945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.629120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 
00:34:54.137 [2024-07-15 07:03:41.629264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.629403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.629585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.629753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.629892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.629917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.630033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.630198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.630369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.630534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.630695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 
00:34:54.137 [2024-07-15 07:03:41.630874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.630923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.631944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.631970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.632117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.632141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.632288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.632313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 00:34:54.137 [2024-07-15 07:03:41.632459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.632484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.137 qpair failed and we were unable to recover it. 
00:34:54.137 [2024-07-15 07:03:41.632624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.137 [2024-07-15 07:03:41.632649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.632815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.632844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.633965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.633992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.634138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.634163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.634354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.634381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 
00:34:54.138 [2024-07-15 07:03:41.634553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.634595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.634752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.634777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.634919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.634945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.635100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.635127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.635302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.635345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.635511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.635562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.635710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.635736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.635886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.635911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.636060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.636104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.636275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.636318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 
00:34:54.138 [2024-07-15 07:03:41.636477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.636521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.636667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.636692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.636838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.636864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.637034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.637062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.637269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.637297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.637474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.637517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.637691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.637716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.637887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.637913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.638079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.638107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.638297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.638340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 
00:34:54.138 [2024-07-15 07:03:41.638507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.638535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.638661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.638686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.638824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.638850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.639057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.639218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.639436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.639661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.639830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.639978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.640022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.640158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.640201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 
00:34:54.138 [2024-07-15 07:03:41.640393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.138 [2024-07-15 07:03:41.640421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.138 qpair failed and we were unable to recover it. 00:34:54.138 [2024-07-15 07:03:41.640581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.640606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.640730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.640760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.640950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.640980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.641179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.641208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.641363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.641393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.641557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.641583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.641700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.641725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.641837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.641861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.642066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.642094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 
00:34:54.139 [2024-07-15 07:03:41.642272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.642300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.642495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.642523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.642686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.642711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.642847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.642873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.643084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.643112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.643282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.643315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.643507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.643565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.643700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.643725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.643866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.643899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.644041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 
00:34:54.139 [2024-07-15 07:03:41.644175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.644341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.644504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.644695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.644859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.644891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.645030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.645058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.645301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.645328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.645500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.645525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.645664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.645690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 00:34:54.139 [2024-07-15 07:03:41.645838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.645863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it. 
00:34:54.139 [2024-07-15 07:03:41.646037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.139 [2024-07-15 07:03:41.646066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.139 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<ptr> with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 07:03:41.646 through 07:03:41.687, alternating between tqpair=0x7ff3a8000b90 and tqpair=0x7ff3a0000b90, always against addr=10.0.0.2, port=4420 ...]
00:34:54.145 [2024-07-15 07:03:41.687184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.145 [2024-07-15 07:03:41.687226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.145 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.687423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.687452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.687639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.687669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.687814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.687839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.688015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.688059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.688261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.688312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.688488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.688532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.688658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.688683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.688808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.688835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.689007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 
00:34:54.428 [2024-07-15 07:03:41.689204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.689439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.689598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.689740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.689913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.689939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.690078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.690104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.690243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.690285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.690452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.690477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.428 [2024-07-15 07:03:41.690631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.428 [2024-07-15 07:03:41.690656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.428 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.690770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.690794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 
00:34:54.429 [2024-07-15 07:03:41.690955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.690999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.691234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.691278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.691451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.691476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.691624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.691650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.691794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.691819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.691978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.692021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.692167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.692195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.692400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.692442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.692568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.692593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 00:34:54.429 [2024-07-15 07:03:41.692735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.429 [2024-07-15 07:03:41.692774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.429 qpair failed and we were unable to recover it. 
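errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 is answered with a reset because nothing is accepting connections on the target side, so posix_sock_create() fails before the NVMe/TCP qpair can even start its handshake. A minimal standalone sketch of the same failure path, assuming a Linux host; the address and port come from the log above, everything else is illustrative:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the qpair is dialing in the log above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}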
00:34:54.429 [2024-07-15 07:03:41.692953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.429 [2024-07-15 07:03:41.692984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.429 qpair failed and we were unable to recover it.
00:34:54.429 [the same three-line connect() failure (errno = 111) for tqpair=0x21b6840 repeats on every retry from 07:03:41.693166 through 07:03:41.717850]
00:34:54.432 [2024-07-15 07:03:41.718730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.432 [2024-07-15 07:03:41.718756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.432 qpair failed and we were unable to recover it.
00:34:54.432 [2024-07-15 07:03:41.718928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.432 [2024-07-15 07:03:41.718971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.432 qpair failed and we were unable to recover it. 00:34:54.432 [2024-07-15 07:03:41.719126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.432 [2024-07-15 07:03:41.719154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.432 qpair failed and we were unable to recover it. 00:34:54.432 [2024-07-15 07:03:41.719308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.432 [2024-07-15 07:03:41.719336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.432 qpair failed and we were unable to recover it. 00:34:54.432 [2024-07-15 07:03:41.719502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.432 [2024-07-15 07:03:41.719527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.432 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.719664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.719707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.719865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.719898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.720021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.720173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.720324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.720477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 
00:34:54.433 [2024-07-15 07:03:41.720671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.720829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.720857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.721971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.721997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.722112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.722137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.722255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.722280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 
00:34:54.433 [2024-07-15 07:03:41.722418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.722443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.722626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.722650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.722781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.722807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.722978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.723141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.723299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.723491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.723686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.723954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.723981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.724123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 
00:34:54.433 [2024-07-15 07:03:41.724258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.724430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.724568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.724734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.724918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.724944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.725093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.725230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.725429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.725594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.725736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 
00:34:54.433 [2024-07-15 07:03:41.725883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.725909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.726077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.726102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.726219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.726244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.726363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.433 [2024-07-15 07:03:41.726389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.433 qpair failed and we were unable to recover it. 00:34:54.433 [2024-07-15 07:03:41.726565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.726590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.726754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.726782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.726947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.726974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.727115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.727280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.727449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 
00:34:54.434 [2024-07-15 07:03:41.727593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.727734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.727910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.727936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.728971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.728997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.729124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 
00:34:54.434 [2024-07-15 07:03:41.729264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.729408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.729581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.729749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.729929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.729955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.730075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.730235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.730414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.730582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.730760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 
00:34:54.434 [2024-07-15 07:03:41.730950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.730976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.731873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.731904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.732017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.732156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.732331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 
00:34:54.434 [2024-07-15 07:03:41.732521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.732664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.732940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.732966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.733083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.733108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.733246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.434 [2024-07-15 07:03:41.733272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.434 qpair failed and we were unable to recover it. 00:34:54.434 [2024-07-15 07:03:41.733412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.733438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.733583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.733608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.733778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.733803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.733968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.733998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.734170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 
00:34:54.435 [2024-07-15 07:03:41.734310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.734452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.734592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.734755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.734924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.734950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.735093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.735269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.735434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.735575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.735744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 
00:34:54.435 [2024-07-15 07:03:41.735886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.735912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.736906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.736932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.737072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.737263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.737407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 
00:34:54.435 [2024-07-15 07:03:41.737540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.737738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.737872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.737902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.738938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.738964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 
00:34:54.435 [2024-07-15 07:03:41.739128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.739154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.739321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.739345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.739486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.739511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.739657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.739682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.739846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.739871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.435 [2024-07-15 07:03:41.740025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.435 [2024-07-15 07:03:41.740051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.435 qpair failed and we were unable to recover it. 00:34:54.436 [2024-07-15 07:03:41.740166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.436 [2024-07-15 07:03:41.740192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.436 qpair failed and we were unable to recover it. 00:34:54.436 [2024-07-15 07:03:41.740330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.436 [2024-07-15 07:03:41.740354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.436 qpair failed and we were unable to recover it. 00:34:54.436 [2024-07-15 07:03:41.740494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.436 [2024-07-15 07:03:41.740522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.436 qpair failed and we were unable to recover it. 00:34:54.436 [2024-07-15 07:03:41.740639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.436 [2024-07-15 07:03:41.740679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.436 qpair failed and we were unable to recover it. 
00:34:54.436 [2024-07-15 07:03:41.740852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.436 [2024-07-15 07:03:41.740884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.436 qpair failed and we were unable to recover it.
00:34:54.437 [2024-07-15 07:03:41.751946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.437 [2024-07-15 07:03:41.751984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420
00:34:54.437 qpair failed and we were unable to recover it.
00:34:54.438 [2024-07-15 07:03:41.759624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.438 [2024-07-15 07:03:41.759656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.438 qpair failed and we were unable to recover it.
00:34:54.439 [2024-07-15 07:03:41.768157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.440 [2024-07-15 07:03:41.768204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420
00:34:54.440 qpair failed and we were unable to recover it.
00:34:54.441 [2024-07-15 07:03:41.775664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.441 [2024-07-15 07:03:41.775693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.441 qpair failed and we were unable to recover it.
00:34:54.441 [2024-07-15 07:03:41.780697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.441 [2024-07-15 07:03:41.780723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.441 qpair failed and we were unable to recover it. 00:34:54.441 [2024-07-15 07:03:41.780842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.441 [2024-07-15 07:03:41.780868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.441 qpair failed and we were unable to recover it. 00:34:54.441 [2024-07-15 07:03:41.781049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.441 [2024-07-15 07:03:41.781077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.441 qpair failed and we were unable to recover it. 00:34:54.441 [2024-07-15 07:03:41.781287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.441 [2024-07-15 07:03:41.781342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.441 qpair failed and we were unable to recover it. 00:34:54.441 [2024-07-15 07:03:41.781521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.441 [2024-07-15 07:03:41.781550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.781708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.781733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.781881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.781907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.782102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.782129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.782334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.782362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.782517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.782546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 
00:34:54.442 [2024-07-15 07:03:41.782691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.782716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.782856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.782888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.783947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.783990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.784163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.784191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.784373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.784416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 
00:34:54.442 [2024-07-15 07:03:41.784560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.784585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.784753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.784779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.784902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.784928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.785121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.785164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.785298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.785341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.785485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.785511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.785672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.785698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.785853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.785885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.786052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.786094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.786262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.786290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 
00:34:54.442 [2024-07-15 07:03:41.786499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.786541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.786687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.786713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.786866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.786931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.787088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.787118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.787271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.787298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.787518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.787569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.787722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.787750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.787887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.787916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.788099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.788126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 00:34:54.442 [2024-07-15 07:03:41.788252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.788279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.442 qpair failed and we were unable to recover it. 
00:34:54.442 [2024-07-15 07:03:41.788432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.442 [2024-07-15 07:03:41.788460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.788690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.788717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.788888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.788917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.789107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.789133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.789344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.789369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.789584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.789644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.789777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.789805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.789953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.789979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.790123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.790148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.790333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.790360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 
00:34:54.443 [2024-07-15 07:03:41.790491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.790519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.790650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.790678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.790837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.790862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.791037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.791206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.791417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.791684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.791843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.791994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.792020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.792208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.792236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 
00:34:54.443 [2024-07-15 07:03:41.792465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.792517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.792697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.792724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.792884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.792925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.793951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.793977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.794096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.794121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 
00:34:54.443 [2024-07-15 07:03:41.794288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.794316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.794586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.794614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.794768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.794795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.794991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.795016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.795136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.795178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.795404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.795455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.795591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.795634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.795757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.795785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.795982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.796008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.443 [2024-07-15 07:03:41.796145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.796188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 
00:34:54.443 [2024-07-15 07:03:41.796355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.443 [2024-07-15 07:03:41.796380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.443 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.796545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.796572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.796725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.796752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.796937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.796962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.797109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.797134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.797293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.797320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.797500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.797528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.797686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.797714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.797886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.797911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.798065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 
00:34:54.444 [2024-07-15 07:03:41.798249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.798435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.798611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.798784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.798969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.798996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.799122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.799163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.799354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.799382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.799548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.799576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.799731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.799759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.799931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.799956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 
00:34:54.444 [2024-07-15 07:03:41.800100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.800124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.800263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.800288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.800419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.800446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.800606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.800634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.800779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.800807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.800981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.801130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.801330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.801534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.801686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 
00:34:54.444 [2024-07-15 07:03:41.801850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.801882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.802965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.444 [2024-07-15 07:03:41.802991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.444 qpair failed and we were unable to recover it. 00:34:54.444 [2024-07-15 07:03:41.803130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.803171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.803335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.803363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.806072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.806112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 
00:34:54.445 [2024-07-15 07:03:41.806274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.806301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.806466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.806494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.806647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.806675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.806835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.806869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 00:34:54.445 [2024-07-15 07:03:41.807907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.445 [2024-07-15 07:03:41.807933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.445 qpair failed and we were unable to recover it. 
00:34:54.445 [2024-07-15 07:03:41.808057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.445 [2024-07-15 07:03:41.808082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.445 qpair failed and we were unable to recover it.
00:34:54.445 (this three-line connect()/qpair error repeats back-to-back with only the timestamps advancing, 07:03:41.808057 through 07:03:41.822492)
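For context on the flood of identical records above: errno = 111 is ECONNREFUSED, meaning the host side of the disconnect test keeps retrying 10.0.0.2:4420 (the NVMe/TCP port from the log) while nothing on the target is accepting connections, and every reconnect attempt fails the same way. A minimal standalone C sketch, not SPDK code (the address and port are simply copied from the log), reproduces the same errno when the address is reachable but has no listener:

/* Reproduce the "connect() failed, errno = 111" seen in the log:
 * attempt a TCP connect to an address/port with no listener. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but no listener, this prints errno 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}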
00:34:54.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 799879 Killed "${NVMF_APP[@]}" "$@"
00:34:54.447 (the same connect()/qpair error triple repeats around this line, 07:03:41.822633 through 07:03:41.824037)
00:34:54.447 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:54.447 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:54.447 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:54.447 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:54.447 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=800430
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 800430
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 800430 ']'
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:54.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:54.448 07:03:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.448 (the connect()/qpair error triple keeps repeating between the trace lines above, 07:03:41.824156 through 07:03:41.829827)
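The waitforlisten 800430 step above blocks until the freshly started nvmf_tgt (pid 800430) is accepting RPCs on /var/tmp/spdk.sock, using the rpc_addr and max_retries=100 values traced just before it. As a rough illustration of that polling idea, a hedged stand-in in C, not the autotest helper itself, connect-polls a UNIX domain socket until someone is listening:

/* Connect-poll a UNIX domain socket until a listener appears or
 * retries run out; path and retry count mirror the traced values. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* someone is listening */
        }
        close(fd);                  /* ECONNREFUSED/ENOENT: not up yet */
        sleep(1);
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("listener is up\n");
    else
        printf("gave up waiting\n");
    return 0;
}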
00:34:54.448 [2024-07-15 07:03:41.829987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.448 [2024-07-15 07:03:41.830013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.448 qpair failed and we were unable to recover it.
00:34:54.448 (this three-line connect()/qpair error repeats back-to-back with only the timestamps advancing, 07:03:41.829987 through 07:03:41.843307)
00:34:54.450 [2024-07-15 07:03:41.843416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.843442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.843577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.843610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.843726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.843752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.843864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.843913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 00:34:54.450 [2024-07-15 07:03:41.844909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.450 [2024-07-15 07:03:41.844936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.450 qpair failed and we were unable to recover it. 
00:34:54.450 [2024-07-15 07:03:41.845044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.845204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.845346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.845521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.845694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.845869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.845900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.846018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.846187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.846354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.846552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 
00:34:54.451 [2024-07-15 07:03:41.846723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.846923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.846950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.847921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.847947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.848093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.848278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 
00:34:54.451 [2024-07-15 07:03:41.848430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.848596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.848738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.848935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.848962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.849102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.849129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.849298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.849323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.849431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.849456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.849611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.849637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.849809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.849834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 
00:34:54.451 [2024-07-15 07:03:41.850180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.850861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.850990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.851016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.851163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.851188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.851335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.851361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.451 [2024-07-15 07:03:41.851498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.451 [2024-07-15 07:03:41.851524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.451 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.851644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.851674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 
00:34:54.452 [2024-07-15 07:03:41.851807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.851834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.851987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.852923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.852949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.853068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.853234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 
00:34:54.452 [2024-07-15 07:03:41.853405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.853574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.853705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.853853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.853884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.854875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.854906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 
00:34:54.452 [2024-07-15 07:03:41.855054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.855247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.855433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.855602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.855771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.855916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.855943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.856094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.856269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.856419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.856591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 
00:34:54.452 [2024-07-15 07:03:41.856760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.856923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.856959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.857954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.857980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.858118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.858144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 00:34:54.452 [2024-07-15 07:03:41.858292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.452 [2024-07-15 07:03:41.858318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.452 qpair failed and we were unable to recover it. 
00:34:54.453 [2024-07-15 07:03:41.858431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.858456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.858567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.858594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.858708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.858734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.858890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.858916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.859908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.859934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 
00:34:54.453 [2024-07-15 07:03:41.860106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.860296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.860437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.860603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.860765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.860908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.860935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.861050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.861221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.861411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.861580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 
00:34:54.453 [2024-07-15 07:03:41.861729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.861925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.861952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.862125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.862299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.862465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.862668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.862813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.862984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.863151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.863319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 
00:34:54.453 [2024-07-15 07:03:41.863513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.863700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.863874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.863905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.864873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.864904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 00:34:54.453 [2024-07-15 07:03:41.865049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.453 [2024-07-15 07:03:41.865085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.453 qpair failed and we were unable to recover it. 
00:34:54.453 [2024-07-15 07:03:41.865262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.453 [2024-07-15 07:03:41.865288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.454 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for every reconnect attempt from 07:03:41.865 through 07:03:41.900; duplicate entries elided ...]
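For context: on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP listener address from the log) at that moment, so every qpair reconnect attempt fails immediately. A minimal standalone sketch of the same failing call, assuming plain POSIX sockets rather than SPDK's internal sock layer:

/* Minimal sketch (not SPDK code): issue the same TCP connect the log
 * shows failing. The address and port are taken from the log lines above. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111,
         * matching "connect() failed, errno = 111" above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against a host with no listener on port 4420, this prints "connect() failed, errno = 111 (Connection refused)", which is exactly the posix_sock_create error the test keeps hitting.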
00:34:54.455 [2024-07-15 07:03:41.874375] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:54.455 [2024-07-15 07:03:41.874439] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
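The two initialization lines above record how the nvmf target process was launched: -c 0xF0 is the DPDK hex core mask (bits 4-7 set, so the target runs on cores 4-7), --file-prefix=spdk0 isolates this process's hugepage files from other SPDK processes, and --base-virtaddr=0x200000000000 pins the hugepage mapping address. A quick illustrative decoder for the core mask (a sketch for reference, not part of the test suite):

/* Illustrative helper: decode the DPDK core mask "-c 0xF0" from the EAL
 * parameters above. Each set bit selects one CPU core, so
 * 0xF0 = 0b11110000 selects cores 4-7. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0ULL;         /* from "-c 0xF0" in the log */
    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core))
            printf(" %d", core);
    }
    printf("\n");                              /* prints: 4 5 6 7 */
    return 0;
}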
00:34:54.455 [2024-07-15 07:03:41.875086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.455 [2024-07-15 07:03:41.875112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.455 qpair failed and we were unable to recover it. 00:34:54.455 [2024-07-15 07:03:41.875259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.455 [2024-07-15 07:03:41.875285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.455 qpair failed and we were unable to recover it. 00:34:54.455 [2024-07-15 07:03:41.875420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.455 [2024-07-15 07:03:41.875445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.875562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.875587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.875759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.875785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.875928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.875954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.876125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.876150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.876266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.876292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.876435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.876461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.876604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.876630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 
00:34:54.456 [2024-07-15 07:03:41.876804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.876829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.876981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.877178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.877325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.877497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.877673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.877845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.877871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.878042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.878208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.878405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 
00:34:54.456 [2024-07-15 07:03:41.878573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.878740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.878888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.878914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.879912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.879943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.880113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 
00:34:54.456 [2024-07-15 07:03:41.880286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.880437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.880602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.880738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.880873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.880904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.881043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.881181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.881376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.881546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.881713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 
00:34:54.456 [2024-07-15 07:03:41.881857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.881888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.882014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.882039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.882191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.882217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.456 qpair failed and we were unable to recover it. 00:34:54.456 [2024-07-15 07:03:41.882391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.456 [2024-07-15 07:03:41.882417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.882556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.882582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.882721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.882747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.882863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.882893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 
00:34:54.457 [2024-07-15 07:03:41.883518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.883888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.883998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.884934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.884960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 
00:34:54.457 [2024-07-15 07:03:41.885105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.885249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.885447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.885610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.885753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.885949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.885975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.886119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.886145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.886288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.886313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.886462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.886488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.886638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.886663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 
00:34:54.457 [2024-07-15 07:03:41.886838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.886864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.887845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.887995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.888163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.888306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 
00:34:54.457 [2024-07-15 07:03:41.888501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.888677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.888813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.888842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.888995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.889021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.457 qpair failed and we were unable to recover it. 00:34:54.457 [2024-07-15 07:03:41.889131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.457 [2024-07-15 07:03:41.889156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.889302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.889328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.889474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.889500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.889647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.889673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.889846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.889871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.890023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 
00:34:54.458 [2024-07-15 07:03:41.890201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.890350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.890543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.890693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.890870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.890902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.891052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.891202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.891366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.891540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.891713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 
00:34:54.458 [2024-07-15 07:03:41.891893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.891920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.892907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.892934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.893098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.893293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.893491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 
00:34:54.458 [2024-07-15 07:03:41.893644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.893782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.893932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.893958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.894939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.894965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 00:34:54.458 [2024-07-15 07:03:41.895118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.458 [2024-07-15 07:03:41.895144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.458 qpair failed and we were unable to recover it. 
00:34:54.461 [2024-07-15 07:03:41.914133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.461 [2024-07-15 07:03:41.914158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.461 EAL: No free 2048 kB hugepages reported on node 1
00:34:54.461 qpair failed and we were unable to recover it.
00:34:54.461 [... the repeating connect()/qpair failure sequence continues ...]
00:34:54.463 [2024-07-15 07:03:41.926642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.926668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.926811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.926836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.926966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.926993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.927943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.927970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.928139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.928165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 
00:34:54.463 [2024-07-15 07:03:41.928312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.928338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.928478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.928504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.928646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.928672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.928812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.463 [2024-07-15 07:03:41.928837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.463 qpair failed and we were unable to recover it. 00:34:54.463 [2024-07-15 07:03:41.928990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.929154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.929291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.929460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.929631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.929792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 
00:34:54.464 [2024-07-15 07:03:41.929965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.929992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.930162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.930329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.930496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.930662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.930859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.930980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.931150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.931280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.931444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 
00:34:54.464 [2024-07-15 07:03:41.931618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.931790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.931815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.931984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.932890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.932916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.933035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 
00:34:54.464 [2024-07-15 07:03:41.933208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.933361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.933529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.933664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.933848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.933874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.934021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.934191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.934358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.934524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 00:34:54.464 [2024-07-15 07:03:41.934668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.464 qpair failed and we were unable to recover it. 
00:34:54.464 [2024-07-15 07:03:41.934810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.464 [2024-07-15 07:03:41.934836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.934969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.934995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.935970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.935997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.936140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.936335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 
00:34:54.465 [2024-07-15 07:03:41.936472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.936634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.936797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.936969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.936996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.937901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.937928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 
00:34:54.465 [2024-07-15 07:03:41.938063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.938202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.938371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.938565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.938697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.938866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.938897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.939019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.939168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.939307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.939502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 
00:34:54.465 [2024-07-15 07:03:41.939629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.939824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.939850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.940857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.940888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.941054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.941081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.941186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.941212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 
00:34:54.465 [2024-07-15 07:03:41.941331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.941357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.465 [2024-07-15 07:03:41.941508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.465 [2024-07-15 07:03:41.941534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.465 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.941668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.941694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.941860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.941900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.942869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.942900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 
00:34:54.466 [2024-07-15 07:03:41.943069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.943231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.943432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.943571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.943718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.943884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.943910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.944035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.944199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.944360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.944489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 
00:34:54.466 [2024-07-15 07:03:41.944630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.944804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.944838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.945829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.945975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.946131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 
00:34:54.466 [2024-07-15 07:03:41.946306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.946457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 [2024-07-15 07:03:41.946483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.946621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.946788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.946930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.946956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.947129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.947156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.947297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.947323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.947471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.947497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.947668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.947694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.947836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.947862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 
00:34:54.466 [2024-07-15 07:03:41.947981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.948007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.948124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.948149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.466 [2024-07-15 07:03:41.948265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.466 [2024-07-15 07:03:41.948290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.466 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.948405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.948431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.948578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.948603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.948767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.948793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.948927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.948954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.949093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.949119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.949259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.949289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 00:34:54.467 [2024-07-15 07:03:41.949431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.467 [2024-07-15 07:03:41.949456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.467 qpair failed and we were unable to recover it. 
00:34:54.467 [2024-07-15 07:03:41.949580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.467 [2024-07-15 07:03:41.949606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.467 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats verbatim, back-to-back, from 07:03:41.949715 through 07:03:41.984656; only the timestamps differ ...]
00:34:54.472 [2024-07-15 07:03:41.984824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.984851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.985855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.985887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.986011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.986184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.986360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 
00:34:54.472 [2024-07-15 07:03:41.986561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.986763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.986906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.986934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.987921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.987949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.988122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 
00:34:54.472 [2024-07-15 07:03:41.988270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.988436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.988572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.988748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.988896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.988923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.472 qpair failed and we were unable to recover it. 00:34:54.472 [2024-07-15 07:03:41.989049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.472 [2024-07-15 07:03:41.989076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.989248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.989275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.989425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.989452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.989571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.989598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.989772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.989799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 
00:34:54.473 [2024-07-15 07:03:41.989915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.989943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.990927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.990953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.991103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.991247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.991427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 
00:34:54.473 [2024-07-15 07:03:41.991602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.991739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.991882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.991909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.992923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.992950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.993068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.993094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 
00:34:54.473 [2024-07-15 07:03:41.993263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.993289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.993411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.993436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.993560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.473 [2024-07-15 07:03:41.993586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.473 qpair failed and we were unable to recover it. 00:34:54.473 [2024-07-15 07:03:41.993709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.993735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.993883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.993909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.994052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.994220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.994388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.994563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.994730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 
00:34:54.474 [2024-07-15 07:03:41.994871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.994917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.995928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.995955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.996099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.996125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.996262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.996289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.996439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.996465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 
00:34:54.474 [2024-07-15 07:03:41.996636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.996662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.996806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.996831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.996986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.997943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.997970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.998113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 
00:34:54.474 [2024-07-15 07:03:41.998249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.998414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.998584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.998747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.998939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.998966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.999108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.999239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.999405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.999579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:41.999768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 
00:34:54.474 [2024-07-15 07:03:41.999943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:41.999970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:42.000087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:42.000118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:42.000296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:42.000322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:42.000430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.474 [2024-07-15 07:03:42.000456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.474 qpair failed and we were unable to recover it. 00:34:54.474 [2024-07-15 07:03:42.000575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.000601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.000756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.000783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.000939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.000965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.001133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.001160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.001300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.001327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.001443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.001469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 
00:34:54.475 [2024-07-15 07:03:42.001615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.001641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.001758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.001786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.002853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.002886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.003034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.003169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 
00:34:54.475 [2024-07-15 07:03:42.003346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.003486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.003654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.003824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.003850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.004871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.004905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 
00:34:54.475 [2024-07-15 07:03:42.005050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.005213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.005387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.005553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.005702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.005847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.005874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.006015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.006042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.006191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.006223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.006370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.475 [2024-07-15 07:03:42.006397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.475 qpair failed and we were unable to recover it. 00:34:54.475 [2024-07-15 07:03:42.006540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.476 [2024-07-15 07:03:42.006568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.476 qpair failed and we were unable to recover it. 
00:34:54.476 [2024-07-15 07:03:42.006690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.476 [2024-07-15 07:03:42.006717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.476 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back from 07:03:42.006690 through 07:03:42.039344; the console prefix advances from 00:34:54.476 to 00:34:54.759 ...]
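For anyone triaging this block: errno = 111 on Linux is ECONNREFUSED, which for TCP typically means the target host answered the SYN with a reset because nothing was accepting on 10.0.0.2:4420 at that instant (or a firewall rejected it). The sketch below reproduces the same errno with plain POSIX sockets; it is an illustration only, the address and port are copied from the log, and none of it is SPDK code.

    /* Illustrative only: a plain POSIX client that fails the same way as the
     * posix_sock_create() calls above when nothing listens on the target.
     * 10.0.0.2:4420 is taken from the log; everything else is hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);           /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the far side this prints:
             *   connect: errno=111 (Connection refused)                    */
            printf("connect: errno=%d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

The initiator here is doing the moral equivalent in a tight loop, logging the triplet above on every refused attempt, which is consistent with the sub-millisecond spacing of the timestamps.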
00:34:54.759 [2024-07-15 07:03:42.039458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.759 [2024-07-15 07:03:42.039484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420
00:34:54.759 qpair failed and we were unable to recover it.
00:34:54.759 [... identical records repeat; from 2024-07-15 07:03:42.039810 the same failure is also reported for tqpair=0x7ff3a8000b90 and, from 07:03:42.040022, for tqpair=0x7ff3a0000b90, still addr=10.0.0.2, port=4420; duplicate records condensed ...]
00:34:54.759 [2024-07-15 07:03:42.040800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:54.759 [2024-07-15 07:03:42.040833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:54.759 [2024-07-15 07:03:42.040848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:54.759 [2024-07-15 07:03:42.040869] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:54.759 [2024-07-15 07:03:42.040893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:54.759 [2024-07-15 07:03:42.040971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:54.759 [2024-07-15 07:03:42.041023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:54.759 [2024-07-15 07:03:42.041068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:54.759 [2024-07-15 07:03:42.041073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:54.759 [... connect() failed (errno = 111) / qpair-failure records continue, interleaved with the reactor startup above, for tqpair=0x7ff3a8000b90, 0x21b6840, and 0x7ff3a0000b90 through 2024-07-15 07:03:42.041975; duplicate records condensed ...]
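The app_setup_trace notices above name two ways to preserve the trace buffer: run 'spdk_trace -s nvmf -i 0' while the target is alive, or copy /dev/shm/nvmf_trace.0 for offline analysis. As a rough sketch of the second option, the plain POSIX copy below uses the source path from the notice; the /tmp destination and buffer size are illustrative assumptions, not anything the log prescribes:

/* Sketch: copy the trace shared-memory file named in the notice above so
 * it survives the target exiting.  /tmp/nvmf_trace.0 is an arbitrary
 * destination chosen for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int in = open("/dev/shm/nvmf_trace.0", O_RDONLY);   /* path from the log */
    int out = open("/tmp/nvmf_trace.0", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    char buf[65536];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            return 1;
        }
    }
    close(in);
    close(out);
    return 0;
}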
00:34:54.759 [2024-07-15 07:03:42.042133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.759 [2024-07-15 07:03:42.042169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420
00:34:54.759 qpair failed and we were unable to recover it.
00:34:54.759 [... identical records repeat for tqpair=0x7ff3a0000b90 and 0x7ff3a8000b90 through 2024-07-15 07:03:42.043380; duplicate records condensed ...]
00:34:54.759 [2024-07-15 07:03:42.043420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c4390 (9): Bad file descriptor
00:34:54.759 [... connect() failed (errno = 111) / qpair-failure records resume for tqpair=0x7ff3a0000b90 and 0x21b6840 through 2024-07-15 07:03:42.044890; duplicate records condensed ...]
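The flush failure above reports errno 9, which on Linux is EBADF ("Bad file descriptor"): by the time nvme_tcp_qpair_process_completions tried to flush tqpair=0x21c4390, the underlying socket descriptor had already been closed. A standalone illustration of the same errno, plain POSIX rather than SPDK:

/* Illustration: any I/O on an already-closed descriptor fails with
 * errno 9 (EBADF) -- the errno the flush above reports. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                       /* the descriptor is gone ... */

    if (write(fds[1], "x", 1) < 0) {     /* ... so the write cannot use it */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}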
00:34:54.760 [2024-07-15 07:03:42.045066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.760 [2024-07-15 07:03:42.045093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.760 qpair failed and we were unable to recover it.
00:34:54.760 [... the same three-line record repeats continuously for tqpair=0x7ff3a8000b90, 0x7ff3a0000b90, and 0x21b6840 (always addr=10.0.0.2, port=4420, errno = 111) from 2024-07-15 07:03:42.045066 through 07:03:42.069298, wallclock 00:34:54.760-00:34:54.763; duplicate records condensed ...]
00:34:54.763 [2024-07-15 07:03:42.069468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.069494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.763 [2024-07-15 07:03:42.069640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.069666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.763 [2024-07-15 07:03:42.069809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.069835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.763 [2024-07-15 07:03:42.069994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.070020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.763 [2024-07-15 07:03:42.070137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.070175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.763 [2024-07-15 07:03:42.070314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.763 [2024-07-15 07:03:42.070340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.763 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.070459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.070485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.070625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.070651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.070767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.070794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.070947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.070974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 
00:34:54.764 [2024-07-15 07:03:42.071096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.071262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.071467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.071629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.071788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.071966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.071992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.072139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.072278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.072421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.072568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 
00:34:54.764 [2024-07-15 07:03:42.072729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.072899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.072927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.764 [2024-07-15 07:03:42.073886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.764 [2024-07-15 07:03:42.073912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.764 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 
00:34:54.765 [2024-07-15 07:03:42.074344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.074839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.074983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.075153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.075287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.075432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.075640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.075793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.075819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 
00:34:54.765 [2024-07-15 07:03:42.075965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.076951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.076978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.077103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.077243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.077409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 
00:34:54.765 [2024-07-15 07:03:42.077554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.077728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.077960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.077987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.078910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.078937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 
00:34:54.765 [2024-07-15 07:03:42.079192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.765 [2024-07-15 07:03:42.079854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.765 qpair failed and we were unable to recover it. 00:34:54.765 [2024-07-15 07:03:42.079996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.766 [2024-07-15 07:03:42.080022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.766 qpair failed and we were unable to recover it. 00:34:54.766 [2024-07-15 07:03:42.080142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.766 [2024-07-15 07:03:42.080168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.766 qpair failed and we were unable to recover it. 00:34:54.766 [2024-07-15 07:03:42.080338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.766 [2024-07-15 07:03:42.080363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.766 qpair failed and we were unable to recover it. 00:34:54.766 [2024-07-15 07:03:42.080476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.766 [2024-07-15 07:03:42.080502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.766 qpair failed and we were unable to recover it. 00:34:54.766 [2024-07-15 07:03:42.080606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.766 [2024-07-15 07:03:42.080632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.766 qpair failed and we were unable to recover it. 
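errno = 111 in the entries above is Linux ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the conventional NVMe/TCP listener port), so SPDK's posix socket layer (posix_sock_create in posix.c) fails the connect and the NVMe/TCP initiator (nvme_tcp_qpair_connect_sock in nvme_tcp.c) cannot bring the qpair back up. A minimal standalone sketch, not SPDK code, showing how that errno value arises:

/*
 * probe.c -- illustrative only, not part of SPDK or this test.
 * connect() to a reachable host with no listener on the port
 * returns -1 with errno = ECONNREFUSED (111 on Linux), the same
 * value posix_sock_create() is logging above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <ipv4-addr> <port>\n", argv[0]);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons((uint16_t)atoi(argv[2]));
    if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With a live host and a closed port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }
    close(fd);
    return 0;
}

Run against a reachable address with no listener (for example ./probe 10.0.0.2 4420 while the target is down) and it prints the same "connect() failed, errno = 111" the log reports.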
[the identical connect() failed, errno = 111 / sock connection error sequence continues from 07:03:42.080753 through 07:03:42.097086 for the same four tqpair values, each attempt again ending "qpair failed and we were unable to recover it."]
00:34:54.768 [2024-07-15 07:03:42.097215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.097241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.097382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.097408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.097562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.097587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.097715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.097741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.097884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.097911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 
00:34:54.768 [2024-07-15 07:03:42.098776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.098947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.098974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.099113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.099263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.099399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.099610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.768 [2024-07-15 07:03:42.099755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.768 qpair failed and we were unable to recover it. 00:34:54.768 [2024-07-15 07:03:42.099859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.099889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.100012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.100168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 
00:34:54.769 [2024-07-15 07:03:42.100327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.100494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.100630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.100814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.100854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.101811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.101837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 
00:34:54.769 [2024-07-15 07:03:42.101977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.102937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.102964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.103084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.103290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.103429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 
00:34:54.769 [2024-07-15 07:03:42.103630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.103794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.103964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.103990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.104906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.104946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.105072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.105107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 
00:34:54.769 [2024-07-15 07:03:42.105262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.769 [2024-07-15 07:03:42.105288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.769 qpair failed and we were unable to recover it. 00:34:54.769 [2024-07-15 07:03:42.105413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.105440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.105560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.105586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.105744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.105783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.105938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.105966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.106097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.106279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.106418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.106616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.106759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 
00:34:54.770 [2024-07-15 07:03:42.106916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.106945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.107838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.107872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 
00:34:54.770 [2024-07-15 07:03:42.108449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.108954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.108980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.109847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.109883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 
00:34:54.770 [2024-07-15 07:03:42.109996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.110934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.110961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.111079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.111106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.111252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.111278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.111388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.111413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 
00:34:54.770 [2024-07-15 07:03:42.111532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.111559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.770 qpair failed and we were unable to recover it. 00:34:54.770 [2024-07-15 07:03:42.111673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.770 [2024-07-15 07:03:42.111698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.111888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.111914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.112939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.112965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 
00:34:54.771 [2024-07-15 07:03:42.113112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.113273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.113409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.113547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.113715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.113890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.113918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.114037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.114174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.114342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.114530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 
00:34:54.771 [2024-07-15 07:03:42.114688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.114840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.114886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.115846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.115985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.116133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 
00:34:54.771 [2024-07-15 07:03:42.116310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.116460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.116629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.116779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.116947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.116975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.117103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.117129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.117252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.117277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.117393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.117420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.117566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.117592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 00:34:54.771 [2024-07-15 07:03:42.117705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.771 [2024-07-15 07:03:42.117731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.771 qpair failed and we were unable to recover it. 
00:34:54.771 [2024-07-15 07:03:42.117901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:54.771 [2024-07-15 07:03:42.117927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 
00:34:54.771 qpair failed and we were unable to recover it. 
00:34:54.771-00:34:54.777 [the same three-line failure repeats continuously from 07:03:42.117901 through 07:03:42.151112: posix_sock_create connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error, and each time "qpair failed and we were unable to recover it"; the retries cycle over tqpairs 0x21b6840, 0x7ff3a0000b90, 0x7ff3a8000b90, and 0x7ff398000b90, always targeting addr=10.0.0.2, port=4420]
00:34:54.777 [2024-07-15 07:03:42.151268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.151292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.151430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.151455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.151566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.151591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.151716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.151741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.151884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.151912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.152068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.152236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.152389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.152529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.152692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 
00:34:54.777 [2024-07-15 07:03:42.152828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.152853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a0000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.153850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.153979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 
00:34:54.777 [2024-07-15 07:03:42.154413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.154968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.154994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.155105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.155130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.777 qpair failed and we were unable to recover it. 00:34:54.777 [2024-07-15 07:03:42.155275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.777 [2024-07-15 07:03:42.155300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.155410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.155436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.155560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.155584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.155692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.155717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 
00:34:54.778 [2024-07-15 07:03:42.155864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.155900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.156911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.156938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 
00:34:54.778 [2024-07-15 07:03:42.157351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.157940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.157965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 
00:34:54.778 [2024-07-15 07:03:42.158839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.158864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.158990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.159886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.159911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 
00:34:54.778 [2024-07-15 07:03:42.160331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.778 [2024-07-15 07:03:42.160939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.778 [2024-07-15 07:03:42.160965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.778 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.161080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.161234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.161365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.161531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.161676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 
00:34:54.779 [2024-07-15 07:03:42.161822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.161860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.162844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.162870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 
00:34:54.779 [2024-07-15 07:03:42.163427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.163867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.163898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.164779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 
00:34:54.779 [2024-07-15 07:03:42.164952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.164981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.165862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.165893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.166003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.166028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.166148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.166173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 00:34:54.779 [2024-07-15 07:03:42.166280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.779 [2024-07-15 07:03:42.166305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.779 qpair failed and we were unable to recover it. 
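Triage note: errno = 111 on Linux is ECONNREFUSED, meaning the kernel reached 10.0.0.2 but nothing was listening on port 4420 (the standard NVMe/TCP port), which is consistent with the nvmf_target_disconnect test having torn the target down. A minimal standalone sketch that reproduces the same errno outside of SPDK; the address and port are copied from the log, everything else is illustrative and not SPDK code:

    /*
     * Hypothetical repro sketch, not part of the SPDK test suite:
     * connect() to a TCP port with no listener and print the errno.
     * On Linux this prints errno = 111 (Connection refused).
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, Linux sets errno to ECONNREFUSED. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }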
00:34:54.779 [... identical errno = 111 connection errors continue for tqpair=0x7ff3a8000b90 and tqpair=0x21b6840 through 07:03:42.167 ...]
00:34:54.779 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:54.780 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:54.780 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:54.780 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:54.780 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.780 [... the shell trace above is interleaved with further connect() failed, errno = 111 / sock connection error pairs for tqpair=0x21b6840 and tqpair=0x7ff3a8000b90 through 07:03:42.169, each ending with "qpair failed and we were unable to recover it." ...]
00:34:54.780 [... the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pattern keeps repeating for tqpair=0x21b6840, tqpair=0x7ff3a8000b90 and tqpair=0x7ff398000b90 from 07:03:42.169 through 07:03:42.177, every attempt against 10.0.0.2:4420 ending with "qpair failed and we were unable to recover it." ...]
00:34:54.781 [2024-07-15 07:03:42.177168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.177316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.177462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.177604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.177750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.177891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.177918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 
00:34:54.781 [2024-07-15 07:03:42.178621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.178936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.178962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.781 [2024-07-15 07:03:42.179079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.781 [2024-07-15 07:03:42.179106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.781 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.179253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.179278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.179421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.179447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.179568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.179595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.179748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.179773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.179889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.179915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 
00:34:54.782 [2024-07-15 07:03:42.180198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.180862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.180981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.181110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.181264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.181446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.181606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 
00:34:54.782 [2024-07-15 07:03:42.181767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.181943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.181968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.182885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.182912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 
00:34:54.782 [2024-07-15 07:03:42.183326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.183951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.183977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.184109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.184147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.184294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.184326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.782 [2024-07-15 07:03:42.184465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.782 [2024-07-15 07:03:42.184491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.782 qpair failed and we were unable to recover it. 00:34:54.783 [2024-07-15 07:03:42.184608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.783 [2024-07-15 07:03:42.184633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.783 qpair failed and we were unable to recover it. 00:34:54.783 [2024-07-15 07:03:42.184752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.783 [2024-07-15 07:03:42.184777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420 00:34:54.783 qpair failed and we were unable to recover it. 
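errno 111 is ECONNREFUSED on Linux: each connect() in the storm above reaches the TCP stack at 10.0.0.2 but finds no listener on NVMe-oF port 4420, so the initiator keeps retrying and logging the same triplet. A minimal standalone probe of the same symptom (a sketch for illustration only, not part of the traced test scripts; bash's /dev/tcp redirection performs an ordinary connect()):

    # Probe the listener address/port from the log. With no NVMe-oF target
    # accepting on 10.0.0.2:4420, the connect fails exactly as the SPDK
    # initiator's does: connect() returns -1 with errno 111 (ECONNREFUSED).
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect to 10.0.0.2:4420 failed -- consistent with errno = 111 above"
    fi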
00:34:54.783 [2024-07-15 07:03:42.184910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.783 [2024-07-15 07:03:42.184937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.783 qpair failed and we were unable to recover it.
00:34:54.783 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:54.783 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:54.783 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.783 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.783 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats from 07:03:42.185061 through 07:03:42.185979 on tqpair=0x21b6840 and tqpair=0x7ff3a8000b90, interleaved with the shell trace above ...]
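The trace lines above mark the tc2 test body proceeding while the reconnect storm continues: nvmf/common.sh installs its cleanup trap (dump shared-memory state, then nvmftestfini) and host/target_disconnect.sh issues its first RPC. Outside the harness, the same bdev can be created directly with SPDK's rpc.py, which rpc_cmd wraps (a sketch assuming the default RPC socket and an spdk checkout as the working directory):

    # Create a 64 MB malloc bdev with a 512-byte block size named Malloc0,
    # matching the traced 'rpc_cmd bdev_malloc_create 64 512 -b Malloc0'
    # (positional args: total_size_mb block_size).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0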
00:34:54.783 [2024-07-15 07:03:42.186105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:54.783 [2024-07-15 07:03:42.186131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3a8000b90 with addr=10.0.0.2, port=4420
00:34:54.783 qpair failed and we were unable to recover it.
00:34:54.783 [... the identical three-line failure repeats from 07:03:42.186305 through 07:03:42.203470, alternating among tqpair=0x7ff3a8000b90, tqpair=0x21b6840, and tqpair=0x7ff398000b90, always against addr=10.0.0.2, port=4420 ...]
00:34:54.785 [2024-07-15 07:03:42.203588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.203614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.203734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.203759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.203899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.203925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.204066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.204201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.204366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.204587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.204834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.204984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.205134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 
00:34:54.786 [2024-07-15 07:03:42.205294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.205455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.205609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.205776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.205804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.205995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.206147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.206336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.206479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.206627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.206797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.206824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 
00:34:54.786 [2024-07-15 07:03:42.206982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.207220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.207393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.207571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.207747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.207931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.207958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.208083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.208280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.208443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.208606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 
00:34:54.786 [2024-07-15 07:03:42.208777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.208916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.208943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.209091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.209117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.209238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.786 [2024-07-15 07:03:42.209265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.786 qpair failed and we were unable to recover it. 00:34:54.786 [2024-07-15 07:03:42.209381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.209408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 00:34:54.787 [2024-07-15 07:03:42.209524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.209551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 00:34:54.787 [2024-07-15 07:03:42.209699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.209725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 00:34:54.787 [2024-07-15 07:03:42.209897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.209924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 00:34:54.787 [2024-07-15 07:03:42.210044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.210069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 00:34:54.787 Malloc0 00:34:54.787 [2024-07-15 07:03:42.210199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.787 [2024-07-15 07:03:42.210226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff398000b90 with addr=10.0.0.2, port=4420 00:34:54.787 qpair failed and we were unable to recover it. 
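Note on the errors above: errno = 111 is ECONNREFUSED on Linux. The initiator's connect() is reaching 10.0.0.2, but nothing is listening on port 4420 yet; the rpc_cmd traces below show the test rebuilding the target, so the retry storm is expected until the listener is restored. As a minimal sketch (not part of this test run; the address and port come from the errors above, the timeout is an assumption), a shell loop waiting for the listener could look like:

    # Poll until a TCP listener appears on port 4420 (iproute2's ss lists
    # listening TCP sockets); give up after roughly five seconds.
    for _ in $(seq 1 50); do
        ss -ltn | grep -q ':4420 ' && { echo 'listener up'; break; }
        sleep 0.1
    done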
00:34:54.787 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:54.787 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:54.787 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.787 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() failed (errno = 111) retries continue from 07:03:42.210343 through 07:03:42.211506 for tqpair=0x7ff398000b90 and 0x21b6840, each ending "qpair failed and we were unable to recover it."]
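The rpc_cmd trace above is the target side being rebuilt; in the autotest scripts, rpc_cmd forwards its arguments to SPDK's scripts/rpc.py against the running nvmf target. A hedged equivalent issued by hand (repository layout and default RPC socket assumed):

    # Create the TCP transport on the target. -t selects the transport type;
    # -o is passed through unchanged from the test's transport options.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o

The "*** TCP Transport Init ***" notice a few lines below is the target-side confirmation that this call reached nvmf_tcp_create().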
[connect() failed (errno = 111) retries continue from 07:03:42.211629 through 07:03:42.213841 for tqpair=0x21b6840 and 0x7ff3a8000b90, each ending "qpair failed and we were unable to recover it."]
00:34:54.787 [2024-07-15 07:03:42.213917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[connect() failed (errno = 111) retries continue from 07:03:42.213985 through 07:03:42.220930 for tqpair=0x7ff3a8000b90 and 0x7ff398000b90, each ending "qpair failed and we were unable to recover it."]
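The nvmf_tcp_create notice above is logged by the target when the transport created by the rpc_cmd call finishes initializing. Out-of-band, the result could be checked with the transport-list RPC (a hedged aside; this run does not do this):

    # Should now report the tcp transport and its options.
    ./scripts/rpc.py nvmf_get_transports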
[connect() failed (errno = 111) retries continue from 07:03:42.221046 through 07:03:42.222378 for tqpair=0x7ff3a8000b90 and 0x7ff398000b90, each ending "qpair failed and we were unable to recover it."]
00:34:54.789 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:54.789 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
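nvmf_create_subsystem recreates subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host NQN to connect) and -s (serial number SPDK00000000000001). The bare "Malloc0" line earlier is consistent with a malloc bdev created to back the subsystem's namespace. The usual follow-up steps are not visible in this excerpt; as a hedged sketch based on the standard SPDK NVMe-oF flow:

    # Assumed steps (only nvmf_create_subsystem appears in this log):
    # attach Malloc0 as a namespace, then listen where the host is retrying.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420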
00:34:54.789 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.789 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() failed (errno = 111) retries continue from 07:03:42.222501 through 07:03:42.228778 for tqpair=0x7ff398000b90, 0x21b6840, and 0x7ff3a8000b90, each ending "qpair failed and we were unable to recover it."]
00:34:54.790 [2024-07-15 07:03:42.228903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.228929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.229884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.229910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.230029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 [2024-07-15 07:03:42.230055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 00:34:54.790 [2024-07-15 07:03:42.230171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.790 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.790 [2024-07-15 07:03:42.230197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b6840 with addr=10.0.0.2, port=4420 00:34:54.790 qpair failed and we were unable to recover it. 
00:34:54.790 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:54.790 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.790 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.790 [... errno = 111 retries continue against tqpair=0x21b6840 and 0x7ff398000b90, timestamps 07:03:42.230331 through 07:03:42.231585, each ending "qpair failed and we were unable to recover it." ...]
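The harness's rpc_cmd wrapper forwards its arguments to SPDK's JSON-RPC client. A standalone sketch of the same namespace-attach call (assumes the target's default RPC socket, and that bdev Malloc0 and subsystem nqn.2016-06.io.spdk:cnode1 were created earlier in the test):

    # Attach bdev Malloc0 as a namespace of cnode1,
    # as target_disconnect.sh@24 does above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0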
00:34:54.790 [... errno = 111 retries continue against tqpair=0x21b6840, 0x7ff3a8000b90 and 0x7ff398000b90, timestamps 07:03:42.231712 through 07:03:42.238141, each ending "qpair failed and we were unable to recover it." ...]
00:34:54.791 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:54.791 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:54.791 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.791 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.791 [... errno = 111 retries continue against tqpair=0x21b6840 and 0x7ff398000b90, timestamps 07:03:42.238262 through 07:03:42.239335 ...]
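Standalone sketch of the listener RPC issued at target_disconnect.sh@25 (same assumptions as the add_ns sketch above); once it completes, the target prints the "NVMe/TCP Target Listening" notice seen below and the errno 111 refusals stop:

    # Bind an NVMe/TCP listener for cnode1 on 10.0.0.2:4420.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420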
00:34:54.791 [... errno = 111 retries continue against tqpair=0x7ff3a8000b90, 0x21b6840 and 0x7ff398000b90, timestamps 07:03:42.239458 through 07:03:42.240980 ...]
00:34:54.792 [... final errno = 111 retries against tqpair=0x7ff398000b90 and 0x21b6840, timestamps 07:03:42.241116 through 07:03:42.241976 ...]
00:34:54.792 [2024-07-15 07:03:42.242147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:54.792 [2024-07-15 07:03:42.244659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:54.792 [2024-07-15 07:03:42.244805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:54.792 [2024-07-15 07:03:42.244848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:54.792 [2024-07-15 07:03:42.244863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:54.792 [2024-07-15 07:03:42.244898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:54.792 [2024-07-15 07:03:42.244938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:54.792 qpair failed and we were unable to recover it.
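With the listener up, connections now pass the TCP stage but fail the NVMe-oF Fabrics CONNECT: the target rejects I/O qpair id 3 because controller ID 0x1 is unknown to it (the controller state was dropped in the disconnect under test). "sct 1, sc 130" is status code type 1 (command specific) with status code 0x82 — which, for a Fabrics CONNECT, the NVMe-oF spec defines as Connect Invalid Parameters, consistent with the unknown-controller rejection. A quick check of the decimal-to-hex decode (sc is logged in decimal, the spec tables are in hex):

    printf 'sc %d = 0x%02x\n' 130 130   # -> sc 130 = 0x82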
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:54.792 07:03:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 799905
00:34:54.792 [2024-07-15 07:03:42.254476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:54.792 [2024-07-15 07:03:42.254597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:54.792 [2024-07-15 07:03:42.254624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:54.792 [2024-07-15 07:03:42.254638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:54.792 [2024-07-15 07:03:42.254652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:54.792 [2024-07-15 07:03:42.254681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:54.792 qpair failed and we were unable to recover it.
00:34:54.792 [... the same CONNECT-failure sequence repeats at 07:03:42.264532, again against tqpair=0x21b6840 on qpair id 3 ...]
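Standalone sketch of the discovery-listener RPC from target_disconnect.sh@26; the "discovery" shorthand that the harness passes is resolved by the RPC client to the discovery subsystem's NQN (same default-socket assumption as above):

    # Expose the discovery service on the same address/port.
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420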
00:34:55.056 [... the same CONNECT-failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x21b6840; CQ transport error -6 (No such device or address) on qpair id 3) repeats unchanged for every reconnect attempt from 07:03:42.274490 through 07:03:42.535425, each ending "qpair failed and we were unable to recover it." ...]
00:34:55.056 [2024-07-15 07:03:42.545342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.545457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.545495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.545510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.545523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.545552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.555388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.555518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.555542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.555557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.555570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.555599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.565288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.565411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.565443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.565459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.565472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.565501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 
00:34:55.056 [2024-07-15 07:03:42.575302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.575438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.575465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.575480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.575493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.575521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.585398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.585516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.585540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.585555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.585569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.585598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.595435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.595561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.595587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.595602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.595614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.595644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 
00:34:55.056 [2024-07-15 07:03:42.605423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.605559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.605585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.605600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.605619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.605649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.615437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.615554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.615578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.615592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.615605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.615635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.625461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.625570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.625595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.625609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.625623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.625651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 
00:34:55.056 [2024-07-15 07:03:42.635528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.635671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.635697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.635712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.635725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.635753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.645533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.645647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.056 [2024-07-15 07:03:42.645672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.056 [2024-07-15 07:03:42.645686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.056 [2024-07-15 07:03:42.645699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.056 [2024-07-15 07:03:42.645729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.056 qpair failed and we were unable to recover it. 00:34:55.056 [2024-07-15 07:03:42.655645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.056 [2024-07-15 07:03:42.655763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.057 [2024-07-15 07:03:42.655787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.057 [2024-07-15 07:03:42.655801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.057 [2024-07-15 07:03:42.655814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.057 [2024-07-15 07:03:42.655843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.057 qpair failed and we were unable to recover it. 
00:34:55.057 [2024-07-15 07:03:42.665598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.057 [2024-07-15 07:03:42.665717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.057 [2024-07-15 07:03:42.665741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.057 [2024-07-15 07:03:42.665755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.057 [2024-07-15 07:03:42.665768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.057 [2024-07-15 07:03:42.665798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.057 qpair failed and we were unable to recover it. 00:34:55.318 [2024-07-15 07:03:42.675631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.318 [2024-07-15 07:03:42.675755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.318 [2024-07-15 07:03:42.675780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.318 [2024-07-15 07:03:42.675795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.318 [2024-07-15 07:03:42.675808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.318 [2024-07-15 07:03:42.675836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.318 qpair failed and we were unable to recover it. 00:34:55.318 [2024-07-15 07:03:42.685667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.318 [2024-07-15 07:03:42.685790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.318 [2024-07-15 07:03:42.685826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.318 [2024-07-15 07:03:42.685841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.318 [2024-07-15 07:03:42.685854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.318 [2024-07-15 07:03:42.685889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.318 qpair failed and we were unable to recover it. 
00:34:55.318 [2024-07-15 07:03:42.695696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.318 [2024-07-15 07:03:42.695870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.318 [2024-07-15 07:03:42.695902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.318 [2024-07-15 07:03:42.695918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.318 [2024-07-15 07:03:42.695937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.318 [2024-07-15 07:03:42.695967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.318 qpair failed and we were unable to recover it. 00:34:55.318 [2024-07-15 07:03:42.705751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.318 [2024-07-15 07:03:42.705910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.318 [2024-07-15 07:03:42.705936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.705951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.705964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.705993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.715720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.715862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.715896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.715913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.715925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.715954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 
00:34:55.319 [2024-07-15 07:03:42.725776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.725916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.725951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.725966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.725979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.726008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.735798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.735925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.735950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.735963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.735977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.736006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.745820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.745936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.745961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.745976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.745989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.746018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 
00:34:55.319 [2024-07-15 07:03:42.755874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.756056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.756082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.756097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.756110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.756138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.765895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.766011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.766036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.766051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.766064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.766093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.775873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.775989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.776014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.776028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.776041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.776070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 
00:34:55.319 [2024-07-15 07:03:42.785907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.786020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.786045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.786059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.786078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.786108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.796004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.796121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.796145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.796159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.796172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.796200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.805979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.806104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.806128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.806143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.806156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.806184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 
00:34:55.319 [2024-07-15 07:03:42.816014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.816132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.816156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.816171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.816186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.816215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.826035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.826150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.826174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.826188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.826202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.319 [2024-07-15 07:03:42.826230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.319 qpair failed and we were unable to recover it. 00:34:55.319 [2024-07-15 07:03:42.836048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.319 [2024-07-15 07:03:42.836168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.319 [2024-07-15 07:03:42.836191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.319 [2024-07-15 07:03:42.836205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.319 [2024-07-15 07:03:42.836218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.836246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 
00:34:55.320 [2024-07-15 07:03:42.846100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.846213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.846238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.846251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.846265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.846293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.856105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.856219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.856243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.856258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.856271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.856300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.866149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.866274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.866300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.866315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.866329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.866358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 
00:34:55.320 [2024-07-15 07:03:42.876191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.876335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.876359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.876379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.876393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.876436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.886207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.886324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.886348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.886362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.886376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.886405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.896259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.896388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.896415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.896430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.896443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.896471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 
00:34:55.320 [2024-07-15 07:03:42.906256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.906368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.906392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.906406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.906420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.906449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.916306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.916430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.916454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.916469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.916482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.916512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 00:34:55.320 [2024-07-15 07:03:42.926305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.320 [2024-07-15 07:03:42.926430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.320 [2024-07-15 07:03:42.926456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.320 [2024-07-15 07:03:42.926470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.320 [2024-07-15 07:03:42.926484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.320 [2024-07-15 07:03:42.926513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.320 qpair failed and we were unable to recover it. 
00:34:55.580 [2024-07-15 07:03:42.936416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.580 [2024-07-15 07:03:42.936546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.936574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.936588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.936602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.936630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 00:34:55.581 [2024-07-15 07:03:42.946401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.946518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.946554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.946569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.946582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.946610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 00:34:55.581 [2024-07-15 07:03:42.956424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.956578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.956605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.956620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.956633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.956662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 
00:34:55.581 [2024-07-15 07:03:42.966426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.966547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.966571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.966591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.966605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.966634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 00:34:55.581 [2024-07-15 07:03:42.976507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.976685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.976712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.976741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.976754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.976782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 00:34:55.581 [2024-07-15 07:03:42.986483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.986602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.986627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.986641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.986655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.986683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it. 
00:34:55.581 [2024-07-15 07:03:42.996553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.581 [2024-07-15 07:03:42.996676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.581 [2024-07-15 07:03:42.996702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.581 [2024-07-15 07:03:42.996717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.581 [2024-07-15 07:03:42.996730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:55.581 [2024-07-15 07:03:42.996759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.581 qpair failed and we were unable to recover it.
[... the same seven-message CONNECT-failure block repeats at roughly 10 ms intervals from 07:03:43.006 through 07:03:43.668, always with qpair id 3, tqpair=0x21b6840, rc -5, sct 1/sc 130; several dozen near-identical repetitions elided ...]
00:34:56.105 [2024-07-15 07:03:43.678484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.105 [2024-07-15 07:03:43.678605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.105 [2024-07-15 07:03:43.678629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.105 [2024-07-15 07:03:43.678643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.105 [2024-07-15 07:03:43.678657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.105 [2024-07-15 07:03:43.678686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.105 qpair failed and we were unable to recover it.
00:34:56.105 [2024-07-15 07:03:43.688504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.105 [2024-07-15 07:03:43.688620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.105 [2024-07-15 07:03:43.688645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.105 [2024-07-15 07:03:43.688658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.105 [2024-07-15 07:03:43.688671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.105 [2024-07-15 07:03:43.688698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.105 qpair failed and we were unable to recover it. 00:34:56.105 [2024-07-15 07:03:43.698529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.105 [2024-07-15 07:03:43.698690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.105 [2024-07-15 07:03:43.698716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.105 [2024-07-15 07:03:43.698730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.105 [2024-07-15 07:03:43.698743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.105 [2024-07-15 07:03:43.698772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.105 qpair failed and we were unable to recover it. 00:34:56.105 [2024-07-15 07:03:43.708603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.105 [2024-07-15 07:03:43.708754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.105 [2024-07-15 07:03:43.708780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.105 [2024-07-15 07:03:43.708794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.105 [2024-07-15 07:03:43.708806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.105 [2024-07-15 07:03:43.708835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.105 qpair failed and we were unable to recover it. 
00:34:56.365 [2024-07-15 07:03:43.718605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.718728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.718753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.718767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.718780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.718807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.728627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.728798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.728823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.728859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.728873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.728923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.738610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.738724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.738750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.738765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.738777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.738806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 
00:34:56.365 [2024-07-15 07:03:43.748640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.748753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.748778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.748793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.748805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.748836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.758744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.758889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.758916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.758930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.758943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.758973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.768715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.768844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.768870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.768892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.768907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.768935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 
00:34:56.365 [2024-07-15 07:03:43.778786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.778928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.778953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.778968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.778981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.779010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.788769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.788936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.788962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.788976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.788990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.789019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.798858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.799014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.799039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.799054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.799066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.799096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 
00:34:56.365 [2024-07-15 07:03:43.808848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.808980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.809006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.809020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.809032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.809061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.818860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.365 [2024-07-15 07:03:43.819035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.365 [2024-07-15 07:03:43.819062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.365 [2024-07-15 07:03:43.819082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.365 [2024-07-15 07:03:43.819096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.365 [2024-07-15 07:03:43.819125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.365 qpair failed and we were unable to recover it. 00:34:56.365 [2024-07-15 07:03:43.828919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.829037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.829063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.829077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.829090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.829120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 
00:34:56.366 [2024-07-15 07:03:43.838935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.839054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.839079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.839094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.839106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.839135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.849005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.849147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.849172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.849186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.849199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.849228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.858980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.859139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.859164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.859179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.859191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.859235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 
00:34:56.366 [2024-07-15 07:03:43.869078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.869188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.869214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.869228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.869241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.869269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.879045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.879164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.879188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.879201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.879213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.879241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.889175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.889337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.889364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.889379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.889391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.889435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 
00:34:56.366 [2024-07-15 07:03:43.899160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.899319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.899347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.899364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.899377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.899422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.909102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.909221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.909252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.909268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.909280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.909310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.919203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.919328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.919354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.919368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.919381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.919411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 
00:34:56.366 [2024-07-15 07:03:43.929263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.929416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.929442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.929456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.929469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.366 [2024-07-15 07:03:43.929499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.366 qpair failed and we were unable to recover it. 00:34:56.366 [2024-07-15 07:03:43.939192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.366 [2024-07-15 07:03:43.939306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.366 [2024-07-15 07:03:43.939332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.366 [2024-07-15 07:03:43.939347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.366 [2024-07-15 07:03:43.939360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.367 [2024-07-15 07:03:43.939389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.367 qpair failed and we were unable to recover it. 00:34:56.367 [2024-07-15 07:03:43.949201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.367 [2024-07-15 07:03:43.949311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.367 [2024-07-15 07:03:43.949337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.367 [2024-07-15 07:03:43.949351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.367 [2024-07-15 07:03:43.949364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.367 [2024-07-15 07:03:43.949398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.367 qpair failed and we were unable to recover it. 
00:34:56.367 [2024-07-15 07:03:43.959350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.367 [2024-07-15 07:03:43.959491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.367 [2024-07-15 07:03:43.959517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.367 [2024-07-15 07:03:43.959534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.367 [2024-07-15 07:03:43.959550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.367 [2024-07-15 07:03:43.959594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.367 qpair failed and we were unable to recover it. 00:34:56.367 [2024-07-15 07:03:43.969262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.367 [2024-07-15 07:03:43.969424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.367 [2024-07-15 07:03:43.969450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.367 [2024-07-15 07:03:43.969464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.367 [2024-07-15 07:03:43.969477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.367 [2024-07-15 07:03:43.969507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.367 qpair failed and we were unable to recover it. 00:34:56.625 [2024-07-15 07:03:43.979319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.625 [2024-07-15 07:03:43.979449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.625 [2024-07-15 07:03:43.979475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.625 [2024-07-15 07:03:43.979490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.625 [2024-07-15 07:03:43.979503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.625 [2024-07-15 07:03:43.979532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.625 qpair failed and we were unable to recover it. 
00:34:56.625 [2024-07-15 07:03:43.989348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.625 [2024-07-15 07:03:43.989467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.625 [2024-07-15 07:03:43.989494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.625 [2024-07-15 07:03:43.989508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.625 [2024-07-15 07:03:43.989521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.625 [2024-07-15 07:03:43.989551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.625 qpair failed and we were unable to recover it. 00:34:56.625 [2024-07-15 07:03:43.999390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.625 [2024-07-15 07:03:43.999515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.625 [2024-07-15 07:03:43.999550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.625 [2024-07-15 07:03:43.999565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.625 [2024-07-15 07:03:43.999578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.625 [2024-07-15 07:03:43.999624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.625 qpair failed and we were unable to recover it. 00:34:56.625 [2024-07-15 07:03:44.009428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.625 [2024-07-15 07:03:44.009550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.625 [2024-07-15 07:03:44.009575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.625 [2024-07-15 07:03:44.009589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.625 [2024-07-15 07:03:44.009602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.625 [2024-07-15 07:03:44.009632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.625 qpair failed and we were unable to recover it. 
00:34:56.625 [2024-07-15 07:03:44.019441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.625 [2024-07-15 07:03:44.019578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.019603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.019617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.019630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.019674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.029423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.029536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.029561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.029576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.029589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.029618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.039523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.039644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.039669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.039683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.039696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.039730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 
00:34:56.626 [2024-07-15 07:03:44.049506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.049674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.049700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.049715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.049728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.049757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.059527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.059641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.059667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.059681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.059694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.059724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.069587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.069706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.069733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.069752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.069767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.069796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 
00:34:56.626 [2024-07-15 07:03:44.079634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.079757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.079783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.079797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.079810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.079840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.089605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.089722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.089754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.089769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.089781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.089811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.099644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.099760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.099786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.099801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.099814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.099844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 
00:34:56.626 [2024-07-15 07:03:44.109692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.109824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.109850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.109865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.109883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.626 [2024-07-15 07:03:44.109913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.626 qpair failed and we were unable to recover it. 00:34:56.626 [2024-07-15 07:03:44.119700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.626 [2024-07-15 07:03:44.119827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.626 [2024-07-15 07:03:44.119853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.626 [2024-07-15 07:03:44.119868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.626 [2024-07-15 07:03:44.119889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.119918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.129726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.129855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.129888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.129904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.129917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.129951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 
00:34:56.627 [2024-07-15 07:03:44.139743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.139859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.139890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.139906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.139918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.139948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.149766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.149919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.149945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.149960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.149973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.150002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.159873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.160037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.160062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.160076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.160089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.160118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 
00:34:56.627 [2024-07-15 07:03:44.169845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.169979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.170006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.170020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.170033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.170062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.179915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.180033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.180063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.180078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.180091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.180121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.189950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.190106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.190132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.190146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.190159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.190203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 
00:34:56.627 [2024-07-15 07:03:44.199934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.200050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.200075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.200089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.200102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.200132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.209982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.210144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.210170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.210184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.210196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.210224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 00:34:56.627 [2024-07-15 07:03:44.220114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.627 [2024-07-15 07:03:44.220277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.627 [2024-07-15 07:03:44.220302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.627 [2024-07-15 07:03:44.220316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.627 [2024-07-15 07:03:44.220335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.627 [2024-07-15 07:03:44.220378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.627 qpair failed and we were unable to recover it. 
00:34:56.627 [2024-07-15 07:03:44.230005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.628 [2024-07-15 07:03:44.230121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.628 [2024-07-15 07:03:44.230146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.628 [2024-07-15 07:03:44.230161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.628 [2024-07-15 07:03:44.230174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.628 [2024-07-15 07:03:44.230203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.628 qpair failed and we were unable to recover it. 00:34:56.886 [2024-07-15 07:03:44.240093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.886 [2024-07-15 07:03:44.240216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.886 [2024-07-15 07:03:44.240241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.886 [2024-07-15 07:03:44.240256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.886 [2024-07-15 07:03:44.240269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.886 [2024-07-15 07:03:44.240298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.886 qpair failed and we were unable to recover it. 00:34:56.886 [2024-07-15 07:03:44.250095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.886 [2024-07-15 07:03:44.250219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.886 [2024-07-15 07:03:44.250246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.886 [2024-07-15 07:03:44.250260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.886 [2024-07-15 07:03:44.250272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.886 [2024-07-15 07:03:44.250302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.886 qpair failed and we were unable to recover it. 
00:34:56.886 [2024-07-15 07:03:44.260135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.886 [2024-07-15 07:03:44.260289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.886 [2024-07-15 07:03:44.260315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.886 [2024-07-15 07:03:44.260330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.886 [2024-07-15 07:03:44.260343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.260388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.270135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.270249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.270275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.270290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.270302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.270331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.280194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.280315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.280340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.280354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.280367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.280396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 
00:34:56.887 [2024-07-15 07:03:44.290191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.290307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.290333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.290348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.290360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.290390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.300216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.300358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.300384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.300398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.300411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.300441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.310238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.310375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.310401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.310416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.310435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.310465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 
00:34:56.887 [2024-07-15 07:03:44.320324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.320444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.320470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.320484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.320497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.320526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.330331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.330487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.330515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.330529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.330543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.330573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.340339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.340460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.340485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.340500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.340514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.340542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 
00:34:56.887 [2024-07-15 07:03:44.350399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.350542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.350570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.350589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.350604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.350649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.360441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.360580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.360606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.887 [2024-07-15 07:03:44.360621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.887 [2024-07-15 07:03:44.360635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.887 [2024-07-15 07:03:44.360663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.887 qpair failed and we were unable to recover it. 00:34:56.887 [2024-07-15 07:03:44.370463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.887 [2024-07-15 07:03:44.370585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.887 [2024-07-15 07:03:44.370611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.370625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.370639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.370668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 
00:34:56.888 [2024-07-15 07:03:44.380515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.380684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.380714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.380731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.380745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.380789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.390470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.390591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.390617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.390632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.390646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.390674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.400526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.400681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.400707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.400728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.400744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.400773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 
00:34:56.888 [2024-07-15 07:03:44.410516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.410639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.410665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.410679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.410694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.410723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.420553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.420720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.420746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.420760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.420790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.420818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.430575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.430692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.430718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.430732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.430746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.430774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 
00:34:56.888 [2024-07-15 07:03:44.440653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.440831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.440858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.440882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.440900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.440930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.450660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.450811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.450838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.450853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.450866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.450903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.460710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.460838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.460874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.460896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.460910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.460939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 
00:34:56.888 [2024-07-15 07:03:44.470719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.470874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.470907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.470922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.470935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.888 [2024-07-15 07:03:44.470964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.888 qpair failed and we were unable to recover it. 00:34:56.888 [2024-07-15 07:03:44.480749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.888 [2024-07-15 07:03:44.480902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.888 [2024-07-15 07:03:44.480928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.888 [2024-07-15 07:03:44.480943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.888 [2024-07-15 07:03:44.480957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.889 [2024-07-15 07:03:44.480985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.889 qpair failed and we were unable to recover it. 00:34:56.889 [2024-07-15 07:03:44.490743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:56.889 [2024-07-15 07:03:44.490868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:56.889 [2024-07-15 07:03:44.490899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:56.889 [2024-07-15 07:03:44.490921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:56.889 [2024-07-15 07:03:44.490935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:56.889 [2024-07-15 07:03:44.490966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:56.889 qpair failed and we were unable to recover it. 
00:34:57.149 [2024-07-15 07:03:44.500923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.501046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.501071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.501085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.501099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.501127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.510841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.510992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.511018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.511032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.511046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.511074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.520852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.520994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.521020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.521034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.521047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.521075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 
00:34:57.149 [2024-07-15 07:03:44.530873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.531029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.531055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.531069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.531083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.531112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.540898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.541008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.541035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.541049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.541062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.541091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.550961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.551077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.551102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.551117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.551131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.551159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 
00:34:57.149 [2024-07-15 07:03:44.560990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.561127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.561153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.561167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.561191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.561236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.570989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.571110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.571136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.571150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.571163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.571192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.581032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.581162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.581188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.581209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.581223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.581252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 
00:34:57.149 [2024-07-15 07:03:44.591036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.591156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.591181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.591196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.591209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.591238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.149 [2024-07-15 07:03:44.601087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.149 [2024-07-15 07:03:44.601216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.149 [2024-07-15 07:03:44.601241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.149 [2024-07-15 07:03:44.601255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.149 [2024-07-15 07:03:44.601269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.149 [2024-07-15 07:03:44.601297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.149 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.611142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.611300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.611325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.611340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.611353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.611398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 
00:34:57.150 [2024-07-15 07:03:44.621175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.621328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.621354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.621368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.621382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.621410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.631188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.631308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.631333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.631348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.631362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.631390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.641212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.641334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.641360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.641374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.641388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.641431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 
00:34:57.150 [2024-07-15 07:03:44.651292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.651423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.651449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.651464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.651477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.651511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.661249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.661392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.661417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.661432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.661446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.661474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.671327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.671453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.671485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.671501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.671513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.671542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 
00:34:57.150 [2024-07-15 07:03:44.681331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.681492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.681517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.681532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.681546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.681575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.691371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.691510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.691535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.691550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.691563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.691607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 00:34:57.150 [2024-07-15 07:03:44.701403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.701520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.701545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.701560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.150 [2024-07-15 07:03:44.701573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.150 [2024-07-15 07:03:44.701601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.150 qpair failed and we were unable to recover it. 
00:34:57.150 [2024-07-15 07:03:44.711405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.150 [2024-07-15 07:03:44.711522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.150 [2024-07-15 07:03:44.711547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.150 [2024-07-15 07:03:44.711562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.151 [2024-07-15 07:03:44.711576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.151 [2024-07-15 07:03:44.711604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-07-15 07:03:44.721469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.151 [2024-07-15 07:03:44.721595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.151 [2024-07-15 07:03:44.721620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.151 [2024-07-15 07:03:44.721635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.151 [2024-07-15 07:03:44.721647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.151 [2024-07-15 07:03:44.721676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-07-15 07:03:44.731545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.151 [2024-07-15 07:03:44.731659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.151 [2024-07-15 07:03:44.731685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.151 [2024-07-15 07:03:44.731698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.151 [2024-07-15 07:03:44.731711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.151 [2024-07-15 07:03:44.731739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.151 qpair failed and we were unable to recover it. 
00:34:57.938 [2024-07-15 07:03:45.373327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.373447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.373472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.373485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.373499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.373528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 00:34:57.938 [2024-07-15 07:03:45.383343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.383502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.383529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.383545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.383573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.383602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 00:34:57.938 [2024-07-15 07:03:45.393355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.393471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.393496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.393510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.393523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.393552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 
00:34:57.938 [2024-07-15 07:03:45.403388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.403511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.403537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.403553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.403566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.403595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 00:34:57.938 [2024-07-15 07:03:45.413426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.413545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.413569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.413583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.413596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.413625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 00:34:57.938 [2024-07-15 07:03:45.423459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.938 [2024-07-15 07:03:45.423588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.938 [2024-07-15 07:03:45.423612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.938 [2024-07-15 07:03:45.423627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.938 [2024-07-15 07:03:45.423639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.938 [2024-07-15 07:03:45.423668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.938 qpair failed and we were unable to recover it. 
00:34:57.938 [2024-07-15 07:03:45.433509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.433629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.433670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.433685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.433699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.433727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.443537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.443653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.443677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.443692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.443705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.443733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.453524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.453643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.453678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.453693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.453707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.453735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 
00:34:57.939 [2024-07-15 07:03:45.463569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.463720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.463746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.463761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.463791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.463821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.473615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.473735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.473762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.473777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.473789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.473818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.483666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.483830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.483856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.483871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.483892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.483922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 
00:34:57.939 [2024-07-15 07:03:45.493638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.493749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.493775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.493790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.493803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.493832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.503676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.503795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.503821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.503835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.503847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.503882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.513737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.513919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.513950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.513967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.513980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.514010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 
00:34:57.939 [2024-07-15 07:03:45.523787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.523947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.523990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.524006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.524020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.939 [2024-07-15 07:03:45.524049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.939 qpair failed and we were unable to recover it. 00:34:57.939 [2024-07-15 07:03:45.533782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.939 [2024-07-15 07:03:45.533957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.939 [2024-07-15 07:03:45.533984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.939 [2024-07-15 07:03:45.534003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.939 [2024-07-15 07:03:45.534015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.940 [2024-07-15 07:03:45.534044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.940 qpair failed and we were unable to recover it. 00:34:57.940 [2024-07-15 07:03:45.543784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.940 [2024-07-15 07:03:45.543904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.940 [2024-07-15 07:03:45.543930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.940 [2024-07-15 07:03:45.543945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.940 [2024-07-15 07:03:45.543958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:57.940 [2024-07-15 07:03:45.543988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.940 qpair failed and we were unable to recover it. 
00:34:58.200 [2024-07-15 07:03:45.553823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.553936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.553962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.553977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.553990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.554020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.563906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.564028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.564054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.564068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.564081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.564116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.573908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.574034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.574061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.574079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.574094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.574124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 
00:34:58.200 [2024-07-15 07:03:45.583891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.584008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.584034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.584048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.584061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.584090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.593918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.594031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.594057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.594071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.594085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.594114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.604005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.604132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.604158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.604172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.604185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.604214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 
00:34:58.200 [2024-07-15 07:03:45.613997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.614127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.614170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.614185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.614198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.614227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.624005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.624129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.624156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.624170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.624184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.624213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.634071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.634247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.634273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.634289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.634317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.634346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 
00:34:58.200 [2024-07-15 07:03:45.644063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.200 [2024-07-15 07:03:45.644184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.200 [2024-07-15 07:03:45.644208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.200 [2024-07-15 07:03:45.644223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.200 [2024-07-15 07:03:45.644236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.200 [2024-07-15 07:03:45.644265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-07-15 07:03:45.654182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.654301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.654325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.654339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.654352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.654386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.664108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.664229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.664256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.664270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.664284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.664312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 
00:34:58.201 [2024-07-15 07:03:45.674145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.674278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.674306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.674321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.674339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.674384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.684179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.684298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.684322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.684337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.684350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.684379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.694186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.694298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.694322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.694336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.694349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.694377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 
00:34:58.201 [2024-07-15 07:03:45.704242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.704378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.704410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.704426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.704439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.704482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.714328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.714442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.714467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.714481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.714494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.714523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.724298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.724417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.724443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.724457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.724470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.724499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 
00:34:58.201 [2024-07-15 07:03:45.734349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.734475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.734501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.734515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.734528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.734557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.744337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.744450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.744477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.744491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.201 [2024-07-15 07:03:45.744510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.201 [2024-07-15 07:03:45.744539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.201 qpair failed and we were unable to recover it. 00:34:58.201 [2024-07-15 07:03:45.754445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.201 [2024-07-15 07:03:45.754594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.201 [2024-07-15 07:03:45.754619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.201 [2024-07-15 07:03:45.754634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.754647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.754676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 
00:34:58.202 [2024-07-15 07:03:45.764421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.202 [2024-07-15 07:03:45.764540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.202 [2024-07-15 07:03:45.764565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.202 [2024-07-15 07:03:45.764579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.764592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.764622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 00:34:58.202 [2024-07-15 07:03:45.774455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.202 [2024-07-15 07:03:45.774597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.202 [2024-07-15 07:03:45.774623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.202 [2024-07-15 07:03:45.774638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.774650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.774694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 00:34:58.202 [2024-07-15 07:03:45.784441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.202 [2024-07-15 07:03:45.784590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.202 [2024-07-15 07:03:45.784616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.202 [2024-07-15 07:03:45.784631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.784643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.784671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 
00:34:58.202 [2024-07-15 07:03:45.794579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.202 [2024-07-15 07:03:45.794701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.202 [2024-07-15 07:03:45.794742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.202 [2024-07-15 07:03:45.794757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.794770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.794798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 00:34:58.202 [2024-07-15 07:03:45.804528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.202 [2024-07-15 07:03:45.804657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.202 [2024-07-15 07:03:45.804683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.202 [2024-07-15 07:03:45.804697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.202 [2024-07-15 07:03:45.804709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.202 [2024-07-15 07:03:45.804754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.202 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.814561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.814682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.814708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.814723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.814737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.814766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 
00:34:58.463 [2024-07-15 07:03:45.824566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.824683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.824709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.824724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.824738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.824766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.834579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.834703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.834729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.834743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.834762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.834791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.844627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.844754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.844781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.844795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.844807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.844835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 
00:34:58.463 [2024-07-15 07:03:45.854618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.854736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.854761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.854776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.854791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.854819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.864696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.864827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.864855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.864870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.864893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.864926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.874684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.874794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.874820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.874835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.874849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.874884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 
00:34:58.463 [2024-07-15 07:03:45.884761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.884895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.884920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.884934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.884946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.463 [2024-07-15 07:03:45.884973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.463 qpair failed and we were unable to recover it. 00:34:58.463 [2024-07-15 07:03:45.894751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.463 [2024-07-15 07:03:45.894869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.463 [2024-07-15 07:03:45.894902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.463 [2024-07-15 07:03:45.894916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.463 [2024-07-15 07:03:45.894929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.894958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.904789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.904917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.904943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.904958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.904970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.904998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 
00:34:58.464 [2024-07-15 07:03:45.914817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.914936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.914962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.914977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.914990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.915020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.924835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.924973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.924998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.925013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.925032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.925062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.934847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.934970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.934995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.935010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.935023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.935053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 
00:34:58.464 [2024-07-15 07:03:45.944920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.945042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.945069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.945083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.945096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.945125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.954919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.955067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.955092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.955107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.955120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.955149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.964993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.965115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.965141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.965155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.965168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.965198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 
00:34:58.464 [2024-07-15 07:03:45.974979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.975096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.975122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.975136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.975149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.975178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.985037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.985151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.985177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.985191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.985204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.985233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:45.995070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:45.995203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:45.995228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:45.995243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:45.995256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:45.995285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 
00:34:58.464 [2024-07-15 07:03:46.005094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.464 [2024-07-15 07:03:46.005261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.464 [2024-07-15 07:03:46.005287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.464 [2024-07-15 07:03:46.005302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.464 [2024-07-15 07:03:46.005330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.464 [2024-07-15 07:03:46.005358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.464 qpair failed and we were unable to recover it. 00:34:58.464 [2024-07-15 07:03:46.015108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.015226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.015251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.015270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.015285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.015313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 00:34:58.465 [2024-07-15 07:03:46.025136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.025252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.025278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.025292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.025305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.025335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 
00:34:58.465 [2024-07-15 07:03:46.035219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.035368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.035394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.035409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.035421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.035452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 00:34:58.465 [2024-07-15 07:03:46.045184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.045324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.045351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.045366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.045384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.045429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 00:34:58.465 [2024-07-15 07:03:46.055243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.055365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.055391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.055405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.055418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.055448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 
00:34:58.465 [2024-07-15 07:03:46.065237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.065350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.065377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.065392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.065404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.065434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 00:34:58.465 [2024-07-15 07:03:46.075277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.465 [2024-07-15 07:03:46.075393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.465 [2024-07-15 07:03:46.075419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.465 [2024-07-15 07:03:46.075433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.465 [2024-07-15 07:03:46.075446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.465 [2024-07-15 07:03:46.075476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.465 qpair failed and we were unable to recover it. 00:34:58.725 [2024-07-15 07:03:46.085313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.085443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.085468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.085483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.085496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.085525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 
00:34:58.725 [2024-07-15 07:03:46.095351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.095509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.095535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.095550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.095562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.095591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 00:34:58.725 [2024-07-15 07:03:46.105389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.105505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.105531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.105551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.105566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.105594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 00:34:58.725 [2024-07-15 07:03:46.115439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.115555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.115581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.115595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.115608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.115637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 
00:34:58.725 [2024-07-15 07:03:46.125483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.125609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.125635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.125649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.125661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.125705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 00:34:58.725 [2024-07-15 07:03:46.135509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.135626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.135651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.135665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.135678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.135706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.725 qpair failed and we were unable to recover it. 00:34:58.725 [2024-07-15 07:03:46.145494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.725 [2024-07-15 07:03:46.145614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.725 [2024-07-15 07:03:46.145640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.725 [2024-07-15 07:03:46.145654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.725 [2024-07-15 07:03:46.145667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.725 [2024-07-15 07:03:46.145697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 
00:34:58.726 [2024-07-15 07:03:46.155494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.155609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.155634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.155649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.155662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.155691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.165573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.165742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.165767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.165781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.165794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.165824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.175563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.175676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.175702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.175716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.175728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.175756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 
00:34:58.726 [2024-07-15 07:03:46.185593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.185703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.185729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.185743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.185755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.185785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.195651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.195807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.195833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.195852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.195889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.195919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.205687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.205814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.205839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.205854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.205866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.205903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 
00:34:58.726 [2024-07-15 07:03:46.215683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.215845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.215871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.215896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.215912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.215940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.225717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.225832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.225857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.225872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.225892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.225922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.235753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.235912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.235937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.235952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.235965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.235995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 
00:34:58.726 [2024-07-15 07:03:46.245773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.245915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.726 [2024-07-15 07:03:46.245940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.726 [2024-07-15 07:03:46.245955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.726 [2024-07-15 07:03:46.245967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.726 [2024-07-15 07:03:46.245996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.726 qpair failed and we were unable to recover it. 00:34:58.726 [2024-07-15 07:03:46.255864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.726 [2024-07-15 07:03:46.256027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.256053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.256068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.256081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.256110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.727 [2024-07-15 07:03:46.265920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.266039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.266064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.266078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.266091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.266120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 
00:34:58.727 [2024-07-15 07:03:46.275842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.275959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.275985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.275999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.276012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.276041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.727 [2024-07-15 07:03:46.285903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.286022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.286054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.286070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.286083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.286112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.727 [2024-07-15 07:03:46.295921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.296035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.296060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.296074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.296086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.296116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 
00:34:58.727 [2024-07-15 07:03:46.305937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.306056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.306081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.306095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.306109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.306138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.727 [2024-07-15 07:03:46.316007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.316124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.316150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.316165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.316178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.316208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.727 [2024-07-15 07:03:46.326032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.326150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.326175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.326190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.326202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.326237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 
00:34:58.727 [2024-07-15 07:03:46.336045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.727 [2024-07-15 07:03:46.336192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.727 [2024-07-15 07:03:46.336217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.727 [2024-07-15 07:03:46.336232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.727 [2024-07-15 07:03:46.336245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.727 [2024-07-15 07:03:46.336289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.727 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.346093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.346225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.346251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.346265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.346278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.346307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.356113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.356243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.356268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.356283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.356296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.356325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 
00:34:58.987 [2024-07-15 07:03:46.366130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.366290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.366315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.366329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.366342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.366372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.376117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.376232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.376264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.376279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.376292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.376321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.386168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.386280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.386306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.386320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.386333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.386363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 
00:34:58.987 [2024-07-15 07:03:46.396206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.396327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.396352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.396367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.396380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.396409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.406252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.987 [2024-07-15 07:03:46.406377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.987 [2024-07-15 07:03:46.406402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.987 [2024-07-15 07:03:46.406416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.987 [2024-07-15 07:03:46.406429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.987 [2024-07-15 07:03:46.406458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.987 qpair failed and we were unable to recover it. 00:34:58.987 [2024-07-15 07:03:46.416237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.416349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.416374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.416389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.416402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.416436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 
00:34:58.988 [2024-07-15 07:03:46.426335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.426459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.426484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.426499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.426512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.426542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 00:34:58.988 [2024-07-15 07:03:46.436291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.436402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.436427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.436441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.436454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.436484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 00:34:58.988 [2024-07-15 07:03:46.446370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.446497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.446523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.446537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.446550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.446579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 
00:34:58.988 [2024-07-15 07:03:46.456355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.456470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.456495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.456509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.456522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.456551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 00:34:58.988 [2024-07-15 07:03:46.466417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.466577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.466608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.466623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.466636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.466666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 00:34:58.988 [2024-07-15 07:03:46.476410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.988 [2024-07-15 07:03:46.476525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.988 [2024-07-15 07:03:46.476550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.988 [2024-07-15 07:03:46.476564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.988 [2024-07-15 07:03:46.476577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:58.988 [2024-07-15 07:03:46.476606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.988 qpair failed and we were unable to recover it. 
00:34:58.988 [2024-07-15 07:03:46.486447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.988 [2024-07-15 07:03:46.486563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.988 [2024-07-15 07:03:46.486588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.988 [2024-07-15 07:03:46.486603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.988 [2024-07-15 07:03:46.486615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.988 [2024-07-15 07:03:46.486645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.988 qpair failed and we were unable to recover it.
00:34:58.988 [2024-07-15 07:03:46.496445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.988 [2024-07-15 07:03:46.496567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.988 [2024-07-15 07:03:46.496592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.988 [2024-07-15 07:03:46.496607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.988 [2024-07-15 07:03:46.496620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.988 [2024-07-15 07:03:46.496649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.988 qpair failed and we were unable to recover it.
00:34:58.988 [2024-07-15 07:03:46.506498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.988 [2024-07-15 07:03:46.506607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.988 [2024-07-15 07:03:46.506633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.988 [2024-07-15 07:03:46.506647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.988 [2024-07-15 07:03:46.506662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.988 [2024-07-15 07:03:46.506696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.988 qpair failed and we were unable to recover it.
00:34:58.988 [2024-07-15 07:03:46.516591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.988 [2024-07-15 07:03:46.516741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.988 [2024-07-15 07:03:46.516767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.988 [2024-07-15 07:03:46.516782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.988 [2024-07-15 07:03:46.516795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.988 [2024-07-15 07:03:46.516824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.988 qpair failed and we were unable to recover it.
00:34:58.988 [2024-07-15 07:03:46.526550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.988 [2024-07-15 07:03:46.526668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.988 [2024-07-15 07:03:46.526693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.988 [2024-07-15 07:03:46.526707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.526720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.526749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.536609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.536746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.536772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.536786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.536799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.536828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.546617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.546729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.546755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.546769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.546782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.546812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.556637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.556749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.556780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.556795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.556810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.556838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.566680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.566800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.566826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.566840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.566853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.566894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.576740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.576887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.576914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.576928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.576941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.576970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.586777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.586900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.586926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.586941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.586954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.586983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:58.989 [2024-07-15 07:03:46.596780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.989 [2024-07-15 07:03:46.596905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.989 [2024-07-15 07:03:46.596933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.989 [2024-07-15 07:03:46.596950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.989 [2024-07-15 07:03:46.596971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:58.989 [2024-07-15 07:03:46.597001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:58.989 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.606803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.606934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.606960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.606976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.606989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.607020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.616819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.616946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.616972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.616986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.616999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.617028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.626827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.626939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.626965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.626979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.626992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.627022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.636868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.636994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.637020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.637034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.637048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.637077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.646960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.647090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.647115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.647129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.647143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.647172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.656925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.657049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.657074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.657088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.657102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.249 [2024-07-15 07:03:46.657130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.249 qpair failed and we were unable to recover it.
00:34:59.249 [2024-07-15 07:03:46.666991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.249 [2024-07-15 07:03:46.667115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.249 [2024-07-15 07:03:46.667140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.249 [2024-07-15 07:03:46.667161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.249 [2024-07-15 07:03:46.667173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.667201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.677040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.677208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.677234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.677256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.677268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.677296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.687047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.687174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.687199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.687214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.687233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.687263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.697073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.697206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.697232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.697246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.697260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.697288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.707100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.707244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.707271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.707285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.707299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.707327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.717141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.717269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.717294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.717309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.717322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.717350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.727145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.727267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.727292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.727307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.727320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.727348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.737198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.737332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.737359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.737373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.737387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.737415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.747240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.747367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.747392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.747407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.747420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.747449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.757240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.757354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.757379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.757393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.757408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.757436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.767309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.767436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.767461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.767476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.767489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.250 [2024-07-15 07:03:46.767517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.250 qpair failed and we were unable to recover it.
00:34:59.250 [2024-07-15 07:03:46.777348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.250 [2024-07-15 07:03:46.777520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.250 [2024-07-15 07:03:46.777547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.250 [2024-07-15 07:03:46.777567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.250 [2024-07-15 07:03:46.777581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.777609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.787326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.787445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.787471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.787485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.787498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.787527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.797398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.797568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.797593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.797608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.797622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.797650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.807414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.807538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.807564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.807579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.807592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.807621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.817416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.817535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.817561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.817575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.817589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.817617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.827445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.827562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.827588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.827602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.827616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.827644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.837504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.837620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.837646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.837661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.837674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.837702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.847521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.847653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.847679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.847693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.847705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.847733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.251 [2024-07-15 07:03:46.857525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.251 [2024-07-15 07:03:46.857647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.251 [2024-07-15 07:03:46.857672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.251 [2024-07-15 07:03:46.857686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.251 [2024-07-15 07:03:46.857700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.251 [2024-07-15 07:03:46.857728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.251 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.867616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.867769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.867794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.867814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.867828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.867862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.877656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.877811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.877847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.877861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.877875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.877923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.887648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.887775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.887799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.887813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.887825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.887853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.897660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.897807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.897835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.897850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.897874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.897916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.907683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.907807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.907833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.907848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.907872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.907911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.917703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.917828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.917854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.917868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.917896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.917926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.927743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.927887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.927914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.927928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.927940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.927971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.937829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.937977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.938002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.938016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.938030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.938059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.947797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.947961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.947986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.948001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.948015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.948043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.513 [2024-07-15 07:03:46.957872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.513 [2024-07-15 07:03:46.958002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.513 [2024-07-15 07:03:46.958027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.513 [2024-07-15 07:03:46.958048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.513 [2024-07-15 07:03:46.958064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.513 [2024-07-15 07:03:46.958092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.513 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:46.967936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:46.968067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:46.968091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:46.968105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:46.968119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:46.968147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:46.977918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:46.978043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:46.978069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:46.978083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:46.978096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:46.978125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:46.987903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:46.988020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:46.988045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:46.988059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:46.988073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:46.988103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:46.997956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:46.998072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:46.998097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:46.998111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:46.998124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:46.998152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.008008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:47.008149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:47.008174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:47.008188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:47.008201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:47.008239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.017988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:47.018123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:47.018150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:47.018165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:47.018177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:47.018205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.028018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:47.028134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:47.028159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:47.028174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:47.028187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:47.028215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.038071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:47.038183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:47.038208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:47.038222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:47.038235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:47.038264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.048089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.514 [2024-07-15 07:03:47.048207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.514 [2024-07-15 07:03:47.048241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.514 [2024-07-15 07:03:47.048257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.514 [2024-07-15 07:03:47.048271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840
00:34:59.514 [2024-07-15 07:03:47.048302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:59.514 qpair failed and we were unable to recover it.
00:34:59.514 [2024-07-15 07:03:47.058213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.058336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.514 [2024-07-15 07:03:47.058363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.514 [2024-07-15 07:03:47.058378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.514 [2024-07-15 07:03:47.058392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.514 [2024-07-15 07:03:47.058421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.514 qpair failed and we were unable to recover it. 00:34:59.514 [2024-07-15 07:03:47.068142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.068253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.514 [2024-07-15 07:03:47.068277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.514 [2024-07-15 07:03:47.068292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.514 [2024-07-15 07:03:47.068305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.514 [2024-07-15 07:03:47.068334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.514 qpair failed and we were unable to recover it. 00:34:59.514 [2024-07-15 07:03:47.078182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.078297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.514 [2024-07-15 07:03:47.078321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.514 [2024-07-15 07:03:47.078336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.514 [2024-07-15 07:03:47.078349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.514 [2024-07-15 07:03:47.078378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.514 qpair failed and we were unable to recover it. 
00:34:59.514 [2024-07-15 07:03:47.088198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.088318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.514 [2024-07-15 07:03:47.088342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.514 [2024-07-15 07:03:47.088356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.514 [2024-07-15 07:03:47.088370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.514 [2024-07-15 07:03:47.088398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.514 qpair failed and we were unable to recover it. 00:34:59.514 [2024-07-15 07:03:47.098275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.098393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.514 [2024-07-15 07:03:47.098417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.514 [2024-07-15 07:03:47.098431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.514 [2024-07-15 07:03:47.098445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.514 [2024-07-15 07:03:47.098473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.514 qpair failed and we were unable to recover it. 00:34:59.514 [2024-07-15 07:03:47.108304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.514 [2024-07-15 07:03:47.108420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.515 [2024-07-15 07:03:47.108444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.515 [2024-07-15 07:03:47.108458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.515 [2024-07-15 07:03:47.108472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.515 [2024-07-15 07:03:47.108500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.515 qpair failed and we were unable to recover it. 
00:34:59.515 [2024-07-15 07:03:47.118301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.515 [2024-07-15 07:03:47.118411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.515 [2024-07-15 07:03:47.118434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.515 [2024-07-15 07:03:47.118448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.515 [2024-07-15 07:03:47.118462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.515 [2024-07-15 07:03:47.118490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.515 qpair failed and we were unable to recover it. 00:34:59.775 [2024-07-15 07:03:47.128318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.128438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.128462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.128476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.128489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.128517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.138375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.138508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.138539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.138555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.138568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.138611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 
00:34:59.776 [2024-07-15 07:03:47.148397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.148514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.148539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.148553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.148566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.148595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.158387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.158505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.158529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.158543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.158556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.158585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.168467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.168588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.168613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.168627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.168640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.168670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 
00:34:59.776 [2024-07-15 07:03:47.178469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.178587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.178611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.178625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.178638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.178672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.188500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.188670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.188699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.188714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.188747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.188776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.198501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.198661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.198689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.198704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.198717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.198746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 
00:34:59.776 [2024-07-15 07:03:47.208625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.208792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.208819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.208834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.208847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.208875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.218602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.218716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.218740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.218754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.218767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.218796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.228573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.228688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.228719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.228734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.228748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.228776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 
00:34:59.776 [2024-07-15 07:03:47.238610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.238721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.238745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.238759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.238774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.238803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.248649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.248773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.248808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.248823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.248836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.248865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 00:34:59.776 [2024-07-15 07:03:47.258737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.776 [2024-07-15 07:03:47.258863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.776 [2024-07-15 07:03:47.258896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.776 [2024-07-15 07:03:47.258915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.776 [2024-07-15 07:03:47.258929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.776 [2024-07-15 07:03:47.258959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.776 qpair failed and we were unable to recover it. 
00:34:59.776 [2024-07-15 07:03:47.268677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.268790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.268815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.268829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.268843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.268884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.278729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.278850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.278874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.278896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.278911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.278940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.288752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.288870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.288900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.288914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.288927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.288956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 
00:34:59.777 [2024-07-15 07:03:47.298777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.298904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.298928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.298943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.298956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.298984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.308852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.308984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.309010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.309025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.309038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.309067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.318828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.318947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.318976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.318992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.319005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.319033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 
00:34:59.777 [2024-07-15 07:03:47.328942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.329083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.329109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.329123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.329137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.329166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.338896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.339024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.339051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.339065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.339078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.339106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.348924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.349039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.349063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.349077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.349090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.349119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 
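[Editor's note, not part of the captured log] "CQ transport error -6 (No such device or address)" is -ENXIO surfacing from spdk_nvme_qpair_process_completions() once the TCP qpair cannot complete its fabric CONNECT. A hedged sketch of the polling pattern this implies follows: reap completions, and on a negative return drop and re-allocate the I/O qpair. The function names are SPDK's public API; the recycle-on-error policy itself is an assumption for illustration, not this test's code.

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool poll_or_recycle(struct spdk_nvme_ctrlr *ctrlr,
                                struct spdk_nvme_qpair **qpair)
    {
        /* 0 means no artificial cap on completions reaped per call. */
        int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

        if (rc >= 0) {
            return true;    /* rc completions were processed */
        }
        /* Negative return (the log's -6 / ENXIO): the transport is gone.
         * Free the qpair and request a fresh one, which re-issues the
         * fabric CONNECT that keeps failing in the records above. */
        spdk_nvme_ctrlr_free_io_qpair(*qpair);
        *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        return *qpair != NULL;
    }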
00:34:59.777 [2024-07-15 07:03:47.358973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.359126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.359152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.359167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.359185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.359230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.369019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.369149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.369175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.369190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.369218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.369250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 00:34:59.777 [2024-07-15 07:03:47.379027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.777 [2024-07-15 07:03:47.379158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.777 [2024-07-15 07:03:47.379183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.777 [2024-07-15 07:03:47.379197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.777 [2024-07-15 07:03:47.379211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:34:59.777 [2024-07-15 07:03:47.379240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:59.777 qpair failed and we were unable to recover it. 
00:35:00.039 [2024-07-15 07:03:47.389086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.389208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.389234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.389248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.389261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.389290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 00:35:00.039 [2024-07-15 07:03:47.399077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.399240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.399267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.399282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.399295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.399323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 00:35:00.039 [2024-07-15 07:03:47.409149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.409308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.409349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.409364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.409377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.409419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 
00:35:00.039 [2024-07-15 07:03:47.419150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.419269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.419293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.419307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.419321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.419350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 00:35:00.039 [2024-07-15 07:03:47.429153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.429269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.429304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.429319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.429332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.429361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 00:35:00.039 [2024-07-15 07:03:47.439196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.439346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.439372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.439387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.039 [2024-07-15 07:03:47.439401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.039 [2024-07-15 07:03:47.439429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.039 qpair failed and we were unable to recover it. 
00:35:00.039 [2024-07-15 07:03:47.449213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.039 [2024-07-15 07:03:47.449330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.039 [2024-07-15 07:03:47.449354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.039 [2024-07-15 07:03:47.449368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.449387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.449416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.459260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.459382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.459407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.459421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.459434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.459463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.469314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.469443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.469469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.469484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.469497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.469525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 
00:35:00.040 [2024-07-15 07:03:47.479301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.479429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.479456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.479470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.479483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.479511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.489312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.489431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.489454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.489469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.489482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.489511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.499364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.499484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.499509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.499523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.499538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.499567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 
00:35:00.040 [2024-07-15 07:03:47.509357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.509476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.509503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.509518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.509531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.509558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.519431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.519543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.519569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.519583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.519599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.519631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.529447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.529577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.529604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.529618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.529631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.529660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 
00:35:00.040 [2024-07-15 07:03:47.539462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.539578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.539603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.539618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.539636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.539666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.549524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.549644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.549671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.549689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.549704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.549735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.559539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.559654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.559679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.559694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.559707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.559737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 
00:35:00.040 [2024-07-15 07:03:47.569570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.569687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.569711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.569726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.569738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.569767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.579618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.579749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.579775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.579789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.579802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.579831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 00:35:00.040 [2024-07-15 07:03:47.589627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.040 [2024-07-15 07:03:47.589750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.040 [2024-07-15 07:03:47.589777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.040 [2024-07-15 07:03:47.589791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.040 [2024-07-15 07:03:47.589804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.040 [2024-07-15 07:03:47.589832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.040 qpair failed and we were unable to recover it. 
00:35:00.563 [2024-07-15 07:03:48.050916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.563 [2024-07-15 07:03:48.051036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.563 [2024-07-15 07:03:48.051062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.563 [2024-07-15 07:03:48.051076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.563 [2024-07-15 07:03:48.051090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.563 [2024-07-15 07:03:48.051119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.563 qpair failed and we were unable to recover it. 00:35:00.563 [2024-07-15 07:03:48.060932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.563 [2024-07-15 07:03:48.061052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.563 [2024-07-15 07:03:48.061077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.563 [2024-07-15 07:03:48.061092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.563 [2024-07-15 07:03:48.061104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21b6840 00:35:00.563 [2024-07-15 07:03:48.061133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:00.563 qpair failed and we were unable to recover it. 00:35:00.563 [2024-07-15 07:03:48.070987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.563 [2024-07-15 07:03:48.071156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.563 [2024-07-15 07:03:48.071190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.563 [2024-07-15 07:03:48.071206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.563 [2024-07-15 07:03:48.071221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3a0000b90 00:35:00.563 [2024-07-15 07:03:48.071253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:00.563 qpair failed and we were unable to recover it. 
00:35:00.563 [2024-07-15 07:03:48.081015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.563 [2024-07-15 07:03:48.081128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.563 [2024-07-15 07:03:48.081160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.563 [2024-07-15 07:03:48.081176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.563 [2024-07-15 07:03:48.081189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3a0000b90 00:35:00.564 [2024-07-15 07:03:48.081222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:00.564 qpair failed and we were unable to recover it. 00:35:00.564 [2024-07-15 07:03:48.091050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.564 [2024-07-15 07:03:48.091184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.564 [2024-07-15 07:03:48.091217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.564 [2024-07-15 07:03:48.091233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.564 [2024-07-15 07:03:48.091248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3a8000b90 00:35:00.564 [2024-07-15 07:03:48.091279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:00.564 qpair failed and we were unable to recover it. 00:35:00.564 [2024-07-15 07:03:48.101076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.564 [2024-07-15 07:03:48.101196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.564 [2024-07-15 07:03:48.101224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.564 [2024-07-15 07:03:48.101240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.564 [2024-07-15 07:03:48.101252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3a8000b90 00:35:00.564 [2024-07-15 07:03:48.101284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:00.564 qpair failed and we were unable to recover it. 
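Every failure block above ends in the same spdk_nvme_qpair_process_completions() error. As an illustration of the host-side pattern that produces "CQ transport error -6" (and not the test's actual code): -6 is -ENXIO, and the controller reset reported just below is the standard recovery path. A minimal sketch against the public SPDK API:

```c
/* Illustrative sketch of the poll/reset pattern behind the messages above;
 * not the harness code, just the public SPDK calls it exercises. */
#include <errno.h>
#include "spdk/nvme.h"

static void poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr,
			  struct spdk_nvme_qpair *qpair)
{
	/* 0 == no completion limit; returns < 0 on transport failure. */
	int rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Matches "CQ transport error -6 (No such device or address)":
		 * the qpair is gone; reset the controller and reconnect. */
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}
```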
00:35:00.564 [2024-07-15 07:03:48.111113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.564 [2024-07-15 07:03:48.111282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.564 [2024-07-15 07:03:48.111314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.564 [2024-07-15 07:03:48.111330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.564 [2024-07-15 07:03:48.111344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff398000b90 00:35:00.564 [2024-07-15 07:03:48.111376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.564 qpair failed and we were unable to recover it. 00:35:00.564 [2024-07-15 07:03:48.121160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.564 [2024-07-15 07:03:48.121275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.564 [2024-07-15 07:03:48.121304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.564 [2024-07-15 07:03:48.121320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.564 [2024-07-15 07:03:48.121339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff398000b90 00:35:00.564 [2024-07-15 07:03:48.121382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.564 qpair failed and we were unable to recover it. 00:35:00.564 [2024-07-15 07:03:48.121477] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:00.564 A controller has encountered a failure and is being reset. 00:35:00.564 Controller properly reset. 00:35:00.564 Initializing NVMe Controllers 00:35:00.564 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:00.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:00.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:00.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:00.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:00.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:00.564 Initialization complete. Launching workers. 
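The re-attach reported above ("Attaching to NVMe over Fabrics controller at 10.0.0.2:4420") can be expressed with SPDK's public connect call. A hedged sketch using only values visible in the log; it assumes the SPDK env is already initialized, the keep-alive timeout value is illustrative (shown because the failure sequence began with a Keep Alive submission error), and error handling is elided:

```c
/* Illustrative re-attach using the transport ID values printed above.
 * Not the harness code; a minimal sketch of the public SPDK API. */
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *attach_target(void)
{
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000; /* illustrative, not from the log */

	/* Synchronous connect; returns NULL on failure. */
	return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}
```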
00:35:00.564 Starting thread on core 1 00:35:00.564 Starting thread on core 2 00:35:00.564 Starting thread on core 3 00:35:00.564 Starting thread on core 0 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:00.564 00:35:00.564 real 0m10.740s 00:35:00.564 user 0m18.287s 00:35:00.564 sys 0m5.350s 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:00.564 ************************************ 00:35:00.564 END TEST nvmf_target_disconnect_tc2 00:35:00.564 ************************************ 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.564 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.564 rmmod nvme_tcp 00:35:00.823 rmmod nvme_fabrics 00:35:00.823 rmmod nvme_keyring 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 800430 ']' 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 800430 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 800430 ']' 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 800430 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 800430 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 800430' 00:35:00.823 killing process with pid 800430 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 800430 00:35:00.823 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 800430 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.082 07:03:48 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:01.082 07:03:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.987 07:03:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:02.987 00:35:02.987 real 0m15.427s 00:35:02.987 user 0m44.224s 00:35:02.987 sys 0m7.314s 00:35:02.987 07:03:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:02.987 07:03:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 ************************************ 00:35:02.987 END TEST nvmf_target_disconnect 00:35:02.987 ************************************ 00:35:02.987 07:03:50 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:35:02.987 07:03:50 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.987 07:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 07:03:50 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:35:02.987 00:35:02.987 real 27m1.416s 00:35:02.987 user 74m29.798s 00:35:02.987 sys 6m17.167s 00:35:02.987 07:03:50 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:02.987 07:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 ************************************ 00:35:02.987 END TEST nvmf_tcp 00:35:02.987 ************************************ 00:35:02.987 07:03:50 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:35:02.988 07:03:50 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:02.988 07:03:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:02.988 07:03:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:02.988 07:03:50 -- common/autotest_common.sh@10 -- # set +x 00:35:03.247 ************************************ 00:35:03.247 START TEST spdkcli_nvmf_tcp 00:35:03.247 ************************************ 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:03.247 * Looking for test storage... 
00:35:03.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=801626 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 801626 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 801626 ']' 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:03.247 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.247 [2024-07-15 07:03:50.717556] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:03.247 [2024-07-15 07:03:50.717658] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801626 ] 00:35:03.247 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.247 [2024-07-15 07:03:50.779916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:03.506 [2024-07-15 07:03:50.873673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.506 [2024-07-15 07:03:50.873678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.506 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:03.506 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:35:03.506 07:03:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:03.506 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.506 07:03:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.506 07:03:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:03.506 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:03.506 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:03.506 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:03.506 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:03.506 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:03.506 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:03.506 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:03.506 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:03.506 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:03.506 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:03.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:03.506 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:03.506 ' 00:35:06.041 [2024-07-15 07:03:53.518058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.418 [2024-07-15 07:03:54.734315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:10.007 [2024-07-15 07:03:56.989482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:11.383 [2024-07-15 07:03:58.939548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:13.284 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:13.284 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:13.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:13.284 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:13.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:13.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:13.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:13.284 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:13.284 07:04:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:13.542 07:04:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.542 07:04:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:13.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:13.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:13.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:13.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:13.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:13.542 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:13.542 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:13.542 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:13.542 ' 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:18.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:18.805 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:18.805 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:18.805 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 801626 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 801626 ']' 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 801626 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 801626 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 801626' 00:35:18.805 killing process with pid 801626 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 801626 00:35:18.805 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 801626 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 801626 ']' 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 801626 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 801626 ']' 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 801626 00:35:19.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (801626) - No such process 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 801626 is not found' 00:35:19.062 Process with pid 801626 is not found 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:19.062 00:35:19.062 real 0m15.903s 00:35:19.062 user 0m33.513s 00:35:19.062 sys 0m0.827s 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:19.062 07:04:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.062 ************************************ 00:35:19.062 END TEST spdkcli_nvmf_tcp 00:35:19.062 ************************************ 00:35:19.062 07:04:06 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:19.062 07:04:06 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:19.062 07:04:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:19.062 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:35:19.062 ************************************ 00:35:19.062 START TEST nvmf_identify_passthru 00:35:19.062 ************************************ 00:35:19.062 07:04:06 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:19.062 * Looking for test storage... 00:35:19.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.062 07:04:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.062 07:04:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.062 07:04:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.062 07:04:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.062 07:04:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.062 07:04:06 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.062 07:04:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.062 07:04:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:19.062 07:04:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:19.062 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:19.063 07:04:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.063 07:04:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.063 07:04:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.063 07:04:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.063 07:04:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.063 07:04:06 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.063 07:04:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.063 07:04:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:19.063 07:04:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.063 07:04:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.063 07:04:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.063 07:04:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:19.063 07:04:06 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:19.063 07:04:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
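nvmftestinit now scans the PCI bus for supported NICs; on this phy-mode host it finds the two E810 ports (0x8086:0x159b) and wires them back-to-back through a network namespace so target and initiator cross a real link. A condensed sketch of the setup the following trace performs, using the cvl_0_0/cvl_0_1 interface names it reports:

# Isolate one port in its own namespace for the target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends and bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command here appears verbatim in the trace; only the interleaved timestamps and shell-function framing are compressed away.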
00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:20.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:20.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:20.968 07:04:08 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:20.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:20.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:20.968 07:04:08 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.968 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:21.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:21.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:35:21.229 00:35:21.229 --- 10.0.0.2 ping statistics --- 00:35:21.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.229 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:21.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:35:21.229 00:35:21.229 --- 10.0.0.1 ping statistics --- 00:35:21.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.229 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:21.229 07:04:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:35:21.229 07:04:08 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:21.229 07:04:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:21.229 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.426 
07:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:25.426 07:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:25.426 07:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:25.426 07:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:25.426 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=806135 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:29.617 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 806135 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 806135 ']' 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:29.617 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.617 [2024-07-15 07:04:17.160352] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:29.617 [2024-07-15 07:04:17.160445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.617 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.617 [2024-07-15 07:04:17.224642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:29.877 [2024-07-15 07:04:17.309660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.877 [2024-07-15 07:04:17.309710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
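Having read the drive's serial (PHLJ916004901P0FGN) and model (INTEL) directly over PCIe, the test restarts nvmf_tgt with --wait-for-rpc so identify passthrough can be enabled before framework initialization, then republishes the same drive over NVMe/TCP. The rpc_cmd calls traced below reduce to plain scripts/rpc.py invocations; a sketch assuming the 0000:88:00.0 bdf discovered above and the default RPC socket:

# Must happen while the target is still waiting for RPC
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
./scripts/rpc.py framework_start_init

# Attach the local drive and expose it through a one-namespace subsystem
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The pass criterion, visible further down, is that spdk_nvme_identify against the TCP endpoint reports the same serial and model numbers as the PCIe-attached drive.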
00:35:29.877 [2024-07-15 07:04:17.309733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.877 [2024-07-15 07:04:17.309744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.877 [2024-07-15 07:04:17.309754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.877 [2024-07-15 07:04:17.309810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.877 [2024-07-15 07:04:17.309871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.877 [2024-07-15 07:04:17.310002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:29.877 [2024-07-15 07:04:17.310006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:35:29.877 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.877 INFO: Log level set to 20 00:35:29.877 INFO: Requests: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "method": "nvmf_set_config", 00:35:29.877 "id": 1, 00:35:29.877 "params": { 00:35:29.877 "admin_cmd_passthru": { 00:35:29.877 "identify_ctrlr": true 00:35:29.877 } 00:35:29.877 } 00:35:29.877 } 00:35:29.877 00:35:29.877 INFO: response: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "id": 1, 00:35:29.877 "result": true 00:35:29.877 } 00:35:29.877 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.877 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.877 INFO: Setting log level to 20 00:35:29.877 INFO: Setting log level to 20 00:35:29.877 INFO: Log level set to 20 00:35:29.877 INFO: Log level set to 20 00:35:29.877 INFO: Requests: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "method": "framework_start_init", 00:35:29.877 "id": 1 00:35:29.877 } 00:35:29.877 00:35:29.877 INFO: Requests: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "method": "framework_start_init", 00:35:29.877 "id": 1 00:35:29.877 } 00:35:29.877 00:35:29.877 [2024-07-15 07:04:17.473246] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:29.877 INFO: response: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "id": 1, 00:35:29.877 "result": true 00:35:29.877 } 00:35:29.877 00:35:29.877 INFO: response: 00:35:29.877 { 00:35:29.877 "jsonrpc": "2.0", 00:35:29.877 "id": 1, 00:35:29.877 "result": true 00:35:29.877 } 00:35:29.877 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.877 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:29.877 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.877 07:04:17 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:29.877 INFO: Setting log level to 40 00:35:29.877 INFO: Setting log level to 40 00:35:29.877 INFO: Setting log level to 40 00:35:29.877 [2024-07-15 07:04:17.483338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.136 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.136 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:30.136 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.136 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:30.136 07:04:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:30.136 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.136 07:04:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.458 Nvme0n1 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.458 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.458 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.458 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.458 [2024-07-15 07:04:20.380772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.458 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.458 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.458 [ 00:35:33.458 { 00:35:33.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:33.458 "subtype": "Discovery", 00:35:33.458 "listen_addresses": [], 00:35:33.458 "allow_any_host": true, 00:35:33.458 "hosts": [] 00:35:33.458 }, 00:35:33.458 { 00:35:33.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.458 "subtype": "NVMe", 00:35:33.458 "listen_addresses": [ 00:35:33.458 { 00:35:33.458 "trtype": "TCP", 00:35:33.458 "adrfam": "IPv4", 00:35:33.458 "traddr": "10.0.0.2", 00:35:33.458 "trsvcid": "4420" 00:35:33.458 } 00:35:33.458 ], 00:35:33.458 "allow_any_host": true, 00:35:33.458 "hosts": [], 00:35:33.458 "serial_number": 
"SPDK00000000000001", 00:35:33.458 "model_number": "SPDK bdev Controller", 00:35:33.458 "max_namespaces": 1, 00:35:33.458 "min_cntlid": 1, 00:35:33.458 "max_cntlid": 65519, 00:35:33.458 "namespaces": [ 00:35:33.458 { 00:35:33.458 "nsid": 1, 00:35:33.458 "bdev_name": "Nvme0n1", 00:35:33.458 "name": "Nvme0n1", 00:35:33.458 "nguid": "4DFB933D3ECE43EB95B011F6EAF15234", 00:35:33.458 "uuid": "4dfb933d-3ece-43eb-95b0-11f6eaf15234" 00:35:33.458 } 00:35:33.458 ] 00:35:33.458 } 00:35:33.458 ] 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:33.459 07:04:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:33.459 rmmod nvme_tcp 00:35:33.459 rmmod nvme_fabrics 00:35:33.459 rmmod nvme_keyring 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:33.459 07:04:20 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 806135 ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 806135 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 806135 ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 806135 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 806135 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 806135' 00:35:33.459 killing process with pid 806135 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 806135 00:35:33.459 07:04:20 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 806135 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:35.381 07:04:22 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.381 07:04:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:35.381 07:04:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.290 07:04:24 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:37.290 00:35:37.290 real 0m18.008s 00:35:37.290 user 0m27.200s 00:35:37.290 sys 0m2.249s 00:35:37.290 07:04:24 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:37.290 07:04:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.290 ************************************ 00:35:37.290 END TEST nvmf_identify_passthru 00:35:37.290 ************************************ 00:35:37.290 07:04:24 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:37.290 07:04:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:37.290 07:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:37.290 07:04:24 -- common/autotest_common.sh@10 -- # set +x 00:35:37.290 ************************************ 00:35:37.290 START TEST nvmf_dif 00:35:37.290 ************************************ 00:35:37.290 07:04:24 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:37.290 * Looking for test storage... 
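Before the nvmf_dif suite gets going, note the teardown pattern that closed nvmf_identify_passthru above (nvmftestfini): sync, unload the host-side NVMe modules, kill the target process, then remove the test namespace and flush the test address. A condensed sketch of those steps, assuming the pid, namespace, and interface names from this job's trace (a simplification, not the harness's literal killprocess/_remove_spdk_ns code):

    #!/usr/bin/env bash
    # Condensed nvmftestfini-style cleanup; names taken from the trace above.
    nvmfpid=806135                       # nvmf_tgt pid recorded at startup
    sync                                 # flush outstanding I/O first
    modprobe -v -r nvme-tcp              # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" 2>/dev/null
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done   # wait for exit
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null                # drop target netns
    ip -4 addr flush cvl_0_1                                   # drop initiator test IP
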
00:35:37.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.290 07:04:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.290 07:04:24 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.290 07:04:24 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.290 07:04:24 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.290 07:04:24 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.290 07:04:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.290 07:04:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.290 07:04:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.290 07:04:24 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:37.291 07:04:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:37.291 07:04:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:37.291 07:04:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:37.291 07:04:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:37.291 07:04:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:37.291 07:04:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.291 07:04:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.291 07:04:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:37.291 07:04:24 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:37.291 07:04:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:39.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:39.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
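The gather_supported_nvmf_pci_devs loop being traced here resolves each supported PCI function (the two Intel 0x159b E810 ports found above) to its kernel net device by globbing sysfs, keeping it only if the link is up; that is what produces the "Found net devices under ..." lines just below. A minimal standalone equivalent (device addresses hard-coded to this host, purely illustrative):

    #!/usr/bin/env bash
    # Map PCI functions to their bound kernel net devices via sysfs.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue            # no netdev bound to this function
            dev=${net##*/}
            state=$(cat "$net/operstate")        # the harness requires "up"
            echo "Found net devices under $pci: $dev (operstate: $state)"
        done
    done
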
00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:39.197 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:39.197 07:04:26 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:39.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.198 07:04:26 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:39.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:39.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:35:39.198 00:35:39.198 --- 10.0.0.2 ping statistics --- 00:35:39.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.198 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:39.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:35:39.198 00:35:39.198 --- 10.0.0.1 ping statistics --- 00:35:39.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.198 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:39.198 07:04:26 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:40.577 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:40.577 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:40.577 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:40.577 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:40.577 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:40.577 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:40.577 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:40.577 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:40.577 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:40.577 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:40.577 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:40.577 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:40.577 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:40.577 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:40.577 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:40.577 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:40.577 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:40.577 07:04:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:40.577 07:04:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=809392 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:40.577 07:04:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 809392 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 809392 ']' 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:40.577 07:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.577 [2024-07-15 07:04:28.082751] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:40.577 [2024-07-15 07:04:28.082845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.577 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.578 [2024-07-15 07:04:28.148240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.836 [2024-07-15 07:04:28.232963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.836 [2024-07-15 07:04:28.233012] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.836 [2024-07-15 07:04:28.233040] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.836 [2024-07-15 07:04:28.233053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.836 [2024-07-15 07:04:28.233063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
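The nvmfappstart sequence above reduces to: launch nvmf_tgt inside the target namespace with all tracepoint groups enabled, record nvmfpid, and poll the JSON-RPC socket until the app is ready (the "Waiting for process to start up..." message). A sketch of that launch-and-wait pattern, assuming SPDK's default /var/tmp/spdk.sock socket; the real waitforlisten adds retry limits and error handling:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target in its namespace; -i 0 sets the shm id, -e 0xFFFF the
    # tracepoint group mask reported in the NOTICE lines above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll until the app answers JSON-RPC on its UNIX domain socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt up as pid $nvmfpid"
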
00:35:40.836 [2024-07-15 07:04:28.233096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:40.836 07:04:28 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 07:04:28 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.836 07:04:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:40.836 07:04:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 [2024-07-15 07:04:28.374297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.836 07:04:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 ************************************ 00:35:40.836 START TEST fio_dif_1_default 00:35:40.836 ************************************ 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 bdev_null0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:40.836 [2024-07-15 07:04:28.434620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:40.836 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.837 { 00:35:40.837 "params": { 00:35:40.837 "name": "Nvme$subsystem", 00:35:40.837 "trtype": "$TEST_TRANSPORT", 00:35:40.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.837 "adrfam": "ipv4", 00:35:40.837 "trsvcid": "$NVMF_PORT", 00:35:40.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.837 "hdgst": ${hdgst:-false}, 00:35:40.837 "ddgst": ${ddgst:-false} 00:35:40.837 }, 00:35:40.837 "method": "bdev_nvme_attach_controller" 00:35:40.837 } 00:35:40.837 EOF 00:35:40.837 )") 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:40.837 07:04:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.837 "params": { 00:35:40.837 "name": "Nvme0", 00:35:40.837 "trtype": "tcp", 00:35:40.837 "traddr": "10.0.0.2", 00:35:40.837 "adrfam": "ipv4", 00:35:40.837 "trsvcid": "4420", 00:35:40.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.837 "hdgst": false, 00:35:40.837 "ddgst": false 00:35:40.837 }, 00:35:40.837 "method": "bdev_nvme_attach_controller" 00:35:40.837 }' 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:41.095 07:04:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.095 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:41.095 fio-3.35 00:35:41.095 Starting 1 thread 00:35:41.353 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.569 00:35:53.569 filename0: (groupid=0, jobs=1): err= 0: pid=809616: Mon Jul 15 07:04:39 2024 00:35:53.569 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:35:53.569 slat (nsec): min=6612, max=89747, avg=8992.81, stdev=4489.22 00:35:53.569 clat (usec): min=40857, max=46269, avg=40997.86, stdev=342.25 00:35:53.569 lat (usec): min=40866, max=46312, avg=41006.85, stdev=342.97 00:35:53.569 clat percentiles (usec): 00:35:53.569 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:53.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:53.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:53.569 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:35:53.569 | 99.99th=[46400] 00:35:53.569 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:35:53.569 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:35:53.569 lat (msec) : 50=100.00% 00:35:53.569 cpu : usr=89.95%, sys=9.78%, ctx=14, majf=0, minf=277 00:35:53.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.569 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:53.569 00:35:53.569 Run status group 0 (all jobs): 00:35:53.569 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 00:35:53.569 real 0m11.174s 00:35:53.569 user 0m10.219s 00:35:53.569 sys 0m1.242s 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 ************************************ 00:35:53.569 END TEST fio_dif_1_default 00:35:53.569 ************************************ 00:35:53.569 07:04:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:53.569 07:04:39 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:53.569 07:04:39 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 ************************************ 00:35:53.569 START TEST fio_dif_1_multi_subsystems 00:35:53.569 ************************************ 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 bdev_null0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 [2024-07-15 07:04:39.651822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 bdev_null1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.569 { 00:35:53.569 "params": { 00:35:53.569 "name": "Nvme$subsystem", 00:35:53.569 "trtype": "$TEST_TRANSPORT", 00:35:53.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.569 "adrfam": "ipv4", 00:35:53.569 "trsvcid": "$NVMF_PORT", 00:35:53.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.569 "hdgst": ${hdgst:-false}, 00:35:53.569 "ddgst": ${ddgst:-false} 00:35:53.569 }, 00:35:53.569 "method": "bdev_nvme_attach_controller" 00:35:53.569 } 00:35:53.569 EOF 00:35:53.569 )") 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:53.569 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.570 { 00:35:53.570 "params": { 00:35:53.570 "name": "Nvme$subsystem", 00:35:53.570 "trtype": "$TEST_TRANSPORT", 00:35:53.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.570 "adrfam": "ipv4", 00:35:53.570 "trsvcid": "$NVMF_PORT", 00:35:53.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.570 "hdgst": ${hdgst:-false}, 00:35:53.570 "ddgst": ${ddgst:-false} 00:35:53.570 }, 00:35:53.570 "method": "bdev_nvme_attach_controller" 00:35:53.570 } 00:35:53.570 EOF 00:35:53.570 )") 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
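The jq . / IFS=, / printf entries around this point are the tail of gen_nvmf_target_json: each heredoc above appended one bdev_nvme_attach_controller entry to the config array, and the entries are comma-joined into the JSON that the printf on the next lines emits (fio then reads it via /dev/fd/62). The joining step in isolation, with abbreviated entries and the outer wrapper assumed to be the usual bdev-subsystem shape rather than copied from common.sh:

    #!/usr/bin/env bash
    # Two abbreviated attach-controller entries standing in for the heredocs.
    config=('{"params":{"name":"Nvme0"},"method":"bdev_nvme_attach_controller"}'
            '{"params":{"name":"Nvme1"},"method":"bdev_nvme_attach_controller"}')
    # IFS is changed inside the subshell only, so "${config[*]}" joins with commas.
    joined=$(IFS=,; printf '%s' "${config[*]}")
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "$joined" | jq .
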
00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.570 "params": { 00:35:53.570 "name": "Nvme0", 00:35:53.570 "trtype": "tcp", 00:35:53.570 "traddr": "10.0.0.2", 00:35:53.570 "adrfam": "ipv4", 00:35:53.570 "trsvcid": "4420", 00:35:53.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.570 "hdgst": false, 00:35:53.570 "ddgst": false 00:35:53.570 }, 00:35:53.570 "method": "bdev_nvme_attach_controller" 00:35:53.570 },{ 00:35:53.570 "params": { 00:35:53.570 "name": "Nvme1", 00:35:53.570 "trtype": "tcp", 00:35:53.570 "traddr": "10.0.0.2", 00:35:53.570 "adrfam": "ipv4", 00:35:53.570 "trsvcid": "4420", 00:35:53.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:53.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:53.570 "hdgst": false, 00:35:53.570 "ddgst": false 00:35:53.570 }, 00:35:53.570 "method": "bdev_nvme_attach_controller" 00:35:53.570 }' 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.570 07:04:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.570 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:53.570 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:53.570 fio-3.35 00:35:53.570 Starting 2 threads 00:35:53.570 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.561 00:36:03.561 filename0: (groupid=0, jobs=1): err= 0: pid=811013: Mon Jul 15 07:04:50 2024 00:36:03.561 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:36:03.561 slat (nsec): min=7063, max=57673, avg=8955.55, stdev=2985.84 00:36:03.561 clat (usec): min=781, max=45941, avg=21075.75, stdev=20139.56 00:36:03.561 lat (usec): min=788, max=45998, avg=21084.71, stdev=20139.33 00:36:03.561 clat percentiles (usec): 00:36:03.561 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 840], 00:36:03.561 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:36:03.561 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:03.561 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:36:03.561 | 99.99th=[45876] 00:36:03.561 
bw ( KiB/s): min= 672, max= 768, per=49.95%, avg=759.58, stdev=25.78, samples=19 00:36:03.561 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:36:03.561 lat (usec) : 1000=49.37% 00:36:03.561 lat (msec) : 2=0.42%, 50=50.21% 00:36:03.561 cpu : usr=94.30%, sys=5.41%, ctx=13, majf=0, minf=39 00:36:03.561 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.561 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.561 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:03.561 filename1: (groupid=0, jobs=1): err= 0: pid=811014: Mon Jul 15 07:04:50 2024 00:36:03.561 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:36:03.561 slat (nsec): min=7008, max=73098, avg=9757.74, stdev=4647.65 00:36:03.561 clat (usec): min=665, max=45903, avg=20980.01, stdev=20209.67 00:36:03.561 lat (usec): min=672, max=45952, avg=20989.77, stdev=20209.85 00:36:03.561 clat percentiles (usec): 00:36:03.561 | 1.00th=[ 717], 5.00th=[ 734], 10.00th=[ 742], 20.00th=[ 766], 00:36:03.561 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 955], 60.00th=[41157], 00:36:03.561 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:03.561 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:36:03.561 | 99.99th=[45876] 00:36:03.561 bw ( KiB/s): min= 704, max= 768, per=50.08%, avg=761.26, stdev=20.18, samples=19 00:36:03.561 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:36:03.561 lat (usec) : 750=13.55%, 1000=36.45% 00:36:03.561 lat (msec) : 50=50.00% 00:36:03.561 cpu : usr=94.15%, sys=5.54%, ctx=15, majf=0, minf=200 00:36:03.561 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.561 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.561 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:03.561 00:36:03.561 Run status group 0 (all jobs): 00:36:03.561 READ: bw=1520KiB/s (1556kB/s), 758KiB/s-762KiB/s (776kB/s-780kB/s), io=14.8MiB (15.6MB), run=10001-10003msec 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.561 00:36:03.561 real 0m11.321s 00:36:03.561 user 0m20.261s 00:36:03.561 sys 0m1.383s 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 ************************************ 00:36:03.561 END TEST fio_dif_1_multi_subsystems 00:36:03.561 ************************************ 00:36:03.561 07:04:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:03.561 07:04:50 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:03.561 07:04:50 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 ************************************ 00:36:03.561 START TEST fio_dif_rand_params 00:36:03.561 ************************************ 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.561 
07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 bdev_null0 00:36:03.561 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.561 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.561 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.561 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.561 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.562 [2024-07-15 07:04:51.021908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
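[Editor's sketch] The sanitizer-detection loop threaded through the trace here ldd's the fio plugin, greps for a sanitizer runtime, and preloads it ahead of the plugin if one is linked in. A minimal standalone sketch of that step, with paths mirroring the trace (on this plain build both greps come back empty, so only the plugin itself ends up in LD_PRELOAD):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Third ldd column is the resolved library path, if the plugin links it.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # The sanitizer runtime (when found) must be preloaded before the plugin.
    export LD_PRELOAD="$asan_lib $plugin"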
00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.562 { 00:36:03.562 "params": { 00:36:03.562 "name": "Nvme$subsystem", 00:36:03.562 "trtype": "$TEST_TRANSPORT", 00:36:03.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.562 "adrfam": "ipv4", 00:36:03.562 "trsvcid": "$NVMF_PORT", 00:36:03.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.562 "hdgst": ${hdgst:-false}, 00:36:03.562 "ddgst": ${ddgst:-false} 00:36:03.562 }, 00:36:03.562 "method": "bdev_nvme_attach_controller" 00:36:03.562 } 00:36:03.562 EOF 00:36:03.562 )") 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
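[Editor's sketch] The heredoc above is the per-subsystem template: gen_nvmf_target_json substitutes the subsystem id into one "bdev_nvme_attach_controller" stanza per id, joins the stanzas with commas, and validates the result with jq. For orientation only (not what this test does — here the resolved JSON is fed straight to fio), the same attach could be issued against a running SPDK app via scripts/rpc.py, using the values that appear in the printed config below:

    # Registers the controller as "Nvme0"; its namespaces surface as Nvme0n1 etc.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0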
00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:03.562 "params": { 00:36:03.562 "name": "Nvme0", 00:36:03.562 "trtype": "tcp", 00:36:03.562 "traddr": "10.0.0.2", 00:36:03.562 "adrfam": "ipv4", 00:36:03.562 "trsvcid": "4420", 00:36:03.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.562 "hdgst": false, 00:36:03.562 "ddgst": false 00:36:03.562 }, 00:36:03.562 "method": "bdev_nvme_attach_controller" 00:36:03.562 }' 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.562 07:04:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.821 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:03.821 ... 
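[Editor's sketch] The launch traced above boils down to preloading the plugin and handing fio the bdev configuration on a file descriptor. A minimal standalone equivalent, with the config and job file written to disk instead of /dev/fd (both file names are placeholders):

    # randread through the SPDK bdev ioengine; the job file would carry the
    # NULL_DIF=3 parameter set from above (bs=128k, numjobs=3, iodepth=3).
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./dif.job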
00:36:03.821 fio-3.35 00:36:03.821 Starting 3 threads 00:36:03.821 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.375 00:36:10.375 filename0: (groupid=0, jobs=1): err= 0: pid=812407: Mon Jul 15 07:04:56 2024 00:36:10.375 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(136MiB/5027msec) 00:36:10.375 slat (nsec): min=4824, max=35832, avg=12681.89, stdev=3275.43 00:36:10.375 clat (usec): min=5027, max=87476, avg=13893.07, stdev=11305.84 00:36:10.375 lat (usec): min=5038, max=87488, avg=13905.76, stdev=11305.65 00:36:10.375 clat percentiles (usec): 00:36:10.375 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 8225], 00:36:10.375 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[11207], 60.00th=[12125], 00:36:10.375 | 70.00th=[13042], 80.00th=[13960], 90.00th=[16450], 95.00th=[48497], 00:36:10.375 | 99.00th=[53216], 99.50th=[53740], 99.90th=[56886], 99.95th=[87557], 00:36:10.375 | 99.99th=[87557] 00:36:10.375 bw ( KiB/s): min=17920, max=39936, per=36.39%, avg=27679.10, stdev=6156.63, samples=10 00:36:10.375 iops : min= 140, max= 312, avg=216.20, stdev=48.10, samples=10 00:36:10.375 lat (msec) : 10=40.50%, 20=51.29%, 50=5.17%, 100=3.04% 00:36:10.375 cpu : usr=90.19%, sys=9.33%, ctx=13, majf=0, minf=98 00:36:10.375 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.375 issued rwts: total=1084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.375 filename0: (groupid=0, jobs=1): err= 0: pid=812408: Mon Jul 15 07:04:56 2024 00:36:10.375 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(120MiB/5044msec) 00:36:10.375 slat (usec): min=4, max=110, avg=13.14, stdev= 4.68 00:36:10.375 clat (usec): min=5133, max=89984, avg=15687.52, stdev=13152.27 00:36:10.375 lat (usec): min=5145, max=89992, avg=15700.66, stdev=13152.15 00:36:10.375 clat percentiles (usec): 00:36:10.375 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 8848], 00:36:10.375 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[12649], 00:36:10.375 | 70.00th=[13566], 80.00th=[14746], 90.00th=[47449], 95.00th=[50594], 00:36:10.375 | 99.00th=[54789], 99.50th=[60556], 99.90th=[89654], 99.95th=[89654], 00:36:10.375 | 99.99th=[89654] 00:36:10.375 bw ( KiB/s): min=19200, max=33280, per=32.21%, avg=24494.70, stdev=4191.63, samples=10 00:36:10.375 iops : min= 150, max= 260, avg=191.30, stdev=32.78, samples=10 00:36:10.375 lat (msec) : 10=30.06%, 20=58.87%, 50=5.01%, 100=6.05% 00:36:10.376 cpu : usr=91.10%, sys=8.45%, ctx=17, majf=0, minf=134 00:36:10.376 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.376 issued rwts: total=958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.376 filename0: (groupid=0, jobs=1): err= 0: pid=812409: Mon Jul 15 07:04:56 2024 00:36:10.376 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(119MiB/5024msec) 00:36:10.376 slat (nsec): min=7398, max=36707, avg=12697.65, stdev=3184.25 00:36:10.376 clat (usec): min=5319, max=90404, avg=15765.10, stdev=13186.97 00:36:10.376 lat (usec): min=5331, max=90416, avg=15777.80, stdev=13186.99 00:36:10.376 clat percentiles (usec): 00:36:10.376 | 
1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 7439], 20.00th=[ 8848], 00:36:10.376 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11863], 60.00th=[12911], 00:36:10.376 | 70.00th=[13698], 80.00th=[14746], 90.00th=[47973], 95.00th=[50594], 00:36:10.376 | 99.00th=[54264], 99.50th=[56886], 99.90th=[90702], 99.95th=[90702], 00:36:10.376 | 99.99th=[90702] 00:36:10.376 bw ( KiB/s): min=18688, max=33280, per=32.04%, avg=24371.20, stdev=4793.58, samples=10 00:36:10.376 iops : min= 146, max= 260, avg=190.40, stdev=37.45, samples=10 00:36:10.376 lat (msec) : 10=32.36%, 20=56.23%, 50=4.71%, 100=6.70% 00:36:10.376 cpu : usr=90.68%, sys=8.90%, ctx=15, majf=0, minf=78 00:36:10.376 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.376 issued rwts: total=955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.376 00:36:10.376 Run status group 0 (all jobs): 00:36:10.376 READ: bw=74.3MiB/s (77.9MB/s), 23.7MiB/s-27.0MiB/s (24.9MB/s-28.3MB/s), io=375MiB (393MB), run=5024-5044msec 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:10.376 07:04:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 bdev_null0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 [2024-07-15 07:04:57.251609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 bdev_null1 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
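[Editor's sketch] Each create_subsystem call traced here is the same four-step target-side RPC sequence; with the harness's rpc_cmd wrapper unwrapped to plain scripts/rpc.py it reads as follows (assumes the TCP transport was already created earlier in the run with nvmf_create_transport):

    # DIF type 2 null bdev: 64 MiB, 512-byte blocks, 16-byte metadata.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420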
00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 bdev_null2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:10.376 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.376 { 00:36:10.376 "params": { 00:36:10.376 "name": "Nvme$subsystem", 00:36:10.376 "trtype": "$TEST_TRANSPORT", 00:36:10.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.376 "adrfam": "ipv4", 00:36:10.376 "trsvcid": "$NVMF_PORT", 00:36:10.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.376 "hdgst": ${hdgst:-false}, 00:36:10.376 "ddgst": ${ddgst:-false} 00:36:10.376 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 } 00:36:10.377 EOF 00:36:10.377 )") 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.377 { 00:36:10.377 "params": { 00:36:10.377 "name": "Nvme$subsystem", 00:36:10.377 "trtype": "$TEST_TRANSPORT", 00:36:10.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.377 "adrfam": "ipv4", 00:36:10.377 "trsvcid": "$NVMF_PORT", 00:36:10.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.377 "hdgst": ${hdgst:-false}, 00:36:10.377 "ddgst": ${ddgst:-false} 00:36:10.377 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 } 00:36:10.377 EOF 00:36:10.377 )") 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.377 { 00:36:10.377 "params": { 00:36:10.377 "name": "Nvme$subsystem", 00:36:10.377 "trtype": "$TEST_TRANSPORT", 00:36:10.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.377 "adrfam": "ipv4", 00:36:10.377 "trsvcid": "$NVMF_PORT", 00:36:10.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.377 "hdgst": ${hdgst:-false}, 00:36:10.377 "ddgst": ${ddgst:-false} 00:36:10.377 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 } 00:36:10.377 EOF 00:36:10.377 )") 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:10.377 "params": { 00:36:10.377 "name": "Nvme0", 00:36:10.377 "trtype": "tcp", 00:36:10.377 "traddr": "10.0.0.2", 00:36:10.377 "adrfam": "ipv4", 00:36:10.377 "trsvcid": "4420", 00:36:10.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.377 "hdgst": false, 00:36:10.377 "ddgst": false 00:36:10.377 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 },{ 00:36:10.377 "params": { 00:36:10.377 "name": "Nvme1", 00:36:10.377 "trtype": "tcp", 00:36:10.377 "traddr": "10.0.0.2", 00:36:10.377 "adrfam": "ipv4", 00:36:10.377 "trsvcid": "4420", 00:36:10.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:10.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:10.377 "hdgst": false, 00:36:10.377 "ddgst": false 00:36:10.377 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 },{ 00:36:10.377 "params": { 00:36:10.377 "name": "Nvme2", 00:36:10.377 "trtype": "tcp", 00:36:10.377 "traddr": "10.0.0.2", 00:36:10.377 "adrfam": "ipv4", 00:36:10.377 "trsvcid": "4420", 00:36:10.377 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:10.377 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:10.377 "hdgst": false, 00:36:10.377 "ddgst": false 00:36:10.377 }, 00:36:10.377 "method": "bdev_nvme_attach_controller" 00:36:10.377 }' 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:10.377 07:04:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.377 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:10.377 ... 00:36:10.377 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:10.377 ... 00:36:10.377 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:10.377 ... 00:36:10.377 fio-3.35 00:36:10.377 Starting 24 threads 00:36:10.377 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.575 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813160: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=64, BW=258KiB/s (264kB/s)(2624KiB/10185msec) 00:36:22.575 slat (usec): min=4, max=136, avg=44.36, stdev=22.48 00:36:22.575 clat (msec): min=3, max=405, avg=247.98, stdev=94.63 00:36:22.575 lat (msec): min=3, max=405, avg=248.03, stdev=94.63 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 52], 20.00th=[ 218], 00:36:22.575 | 30.00th=[ 247], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 296], 00:36:22.575 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:36:22.575 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 405], 99.95th=[ 405], 00:36:22.575 | 99.99th=[ 405] 00:36:22.575 bw ( KiB/s): min= 128, max= 880, per=4.17%, avg=256.00, stdev=155.73, samples=20 00:36:22.575 iops : min= 32, max= 220, avg=64.00, stdev=38.93, samples=20 00:36:22.575 lat (msec) : 4=1.83%, 10=3.66%, 20=2.90%, 50=1.37%, 100=2.44% 00:36:22.575 lat (msec) : 250=18.60%, 500=69.21% 00:36:22.575 cpu : usr=97.51%, sys=1.74%, ctx=30, majf=0, minf=9 00:36:22.575 IO depths : 1=2.9%, 2=8.4%, 4=22.4%, 8=56.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=93.8%, 8=1.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813161: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10153msec) 00:36:22.575 slat (usec): min=13, max=150, avg=47.96, stdev=28.57 00:36:22.575 clat (msec): min=120, max=428, avg=281.60, stdev=46.99 00:36:22.575 lat (msec): min=120, max=428, avg=281.65, stdev=47.00 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 121], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 266], 00:36:22.575 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 288], 60.00th=[ 296], 00:36:22.575 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 359], 00:36:22.575 | 99.00th=[ 409], 99.50th=[ 422], 99.90th=[ 430], 99.95th=[ 430], 00:36:22.575 | 99.99th=[ 430] 00:36:22.575 bw ( KiB/s): min= 128, max= 368, per=3.64%, avg=224.00, stdev=66.28, samples=20 00:36:22.575 iops : min= 32, max= 92, avg=56.00, stdev=16.57, samples=20 00:36:22.575 lat (msec) : 250=19.10%, 500=80.90% 00:36:22.575 cpu : usr=97.46%, sys=1.64%, 
ctx=74, majf=0, minf=9 00:36:22.575 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813162: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10158msec) 00:36:22.575 slat (nsec): min=8712, max=66746, avg=23447.59, stdev=11797.40 00:36:22.575 clat (msec): min=158, max=407, avg=267.10, stdev=47.04 00:36:22.575 lat (msec): min=158, max=407, avg=267.13, stdev=47.04 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 203], 20.00th=[ 215], 00:36:22.575 | 30.00th=[ 239], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 292], 00:36:22.575 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 317], 00:36:22.575 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 409], 99.95th=[ 409], 00:36:22.575 | 99.99th=[ 409] 00:36:22.575 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=236.80, stdev=62.64, samples=20 00:36:22.575 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:36:22.575 lat (msec) : 250=30.92%, 500=69.08% 00:36:22.575 cpu : usr=98.31%, sys=1.33%, ctx=21, majf=0, minf=9 00:36:22.575 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813163: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=72, BW=291KiB/s (298kB/s)(2960KiB/10157msec) 00:36:22.575 slat (nsec): min=8095, max=60462, avg=17468.42, stdev=10742.64 00:36:22.575 clat (msec): min=162, max=347, avg=219.24, stdev=38.44 00:36:22.575 lat (msec): min=162, max=347, avg=219.26, stdev=38.44 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 163], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 186], 00:36:22.575 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 218], 00:36:22.575 | 70.00th=[ 228], 80.00th=[ 255], 90.00th=[ 284], 95.00th=[ 305], 00:36:22.575 | 99.00th=[ 309], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:36:22.575 | 99.99th=[ 347] 00:36:22.575 bw ( KiB/s): min= 144, max= 384, per=4.72%, avg=289.60, stdev=59.84, samples=20 00:36:22.575 iops : min= 36, max= 96, avg=72.40, stdev=14.96, samples=20 00:36:22.575 lat (msec) : 250=79.19%, 500=20.81% 00:36:22.575 cpu : usr=97.65%, sys=1.60%, ctx=32, majf=0, minf=9 00:36:22.575 IO depths : 1=1.6%, 2=6.5%, 4=20.8%, 8=60.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813164: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=83, BW=332KiB/s (340kB/s)(3384KiB/10179msec) 00:36:22.575 slat (usec): min=5, max=104, avg=34.86, stdev=30.65 00:36:22.575 clat (msec): min=18, 
max=319, avg=191.55, stdev=46.58 00:36:22.575 lat (msec): min=18, max=319, avg=191.58, stdev=46.58 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 19], 5.00th=[ 63], 10.00th=[ 163], 20.00th=[ 178], 00:36:22.575 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 201], 60.00th=[ 205], 00:36:22.575 | 70.00th=[ 209], 80.00th=[ 220], 90.00th=[ 230], 95.00th=[ 236], 00:36:22.575 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 321], 00:36:22.575 | 99.99th=[ 321] 00:36:22.575 bw ( KiB/s): min= 256, max= 625, per=5.42%, avg=332.05, stdev=78.21, samples=20 00:36:22.575 iops : min= 64, max= 156, avg=83.00, stdev=19.50, samples=20 00:36:22.575 lat (msec) : 20=1.89%, 50=1.89%, 100=1.89%, 250=90.78%, 500=3.55% 00:36:22.575 cpu : usr=98.05%, sys=1.39%, ctx=20, majf=0, minf=9 00:36:22.575 IO depths : 1=0.6%, 2=1.8%, 4=9.6%, 8=76.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=89.7%, 8=5.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813165: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10144msec) 00:36:22.575 slat (usec): min=4, max=126, avg=29.05, stdev=13.86 00:36:22.575 clat (msec): min=173, max=407, avg=289.58, stdev=40.07 00:36:22.575 lat (msec): min=173, max=407, avg=289.61, stdev=40.06 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 209], 5.00th=[ 218], 10.00th=[ 230], 20.00th=[ 259], 00:36:22.575 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:36:22.575 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 321], 95.00th=[ 372], 00:36:22.575 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:36:22.575 | 99.99th=[ 409] 00:36:22.575 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=215.58, stdev=59.48, samples=19 00:36:22.575 iops : min= 32, max= 64, avg=53.89, stdev=14.87, samples=19 00:36:22.575 lat (msec) : 250=16.43%, 500=83.57% 00:36:22.575 cpu : usr=98.47%, sys=1.14%, ctx=13, majf=0, minf=9 00:36:22.575 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.575 filename0: (groupid=0, jobs=1): err= 0: pid=813166: Mon Jul 15 07:05:08 2024 00:36:22.575 read: IOPS=75, BW=303KiB/s (311kB/s)(3080KiB/10157msec) 00:36:22.575 slat (nsec): min=8098, max=96350, avg=18401.94, stdev=20331.05 00:36:22.575 clat (msec): min=135, max=327, avg=210.52, stdev=37.59 00:36:22.575 lat (msec): min=135, max=327, avg=210.54, stdev=37.59 00:36:22.575 clat percentiles (msec): 00:36:22.575 | 1.00th=[ 140], 5.00th=[ 148], 10.00th=[ 167], 20.00th=[ 184], 00:36:22.575 | 30.00th=[ 188], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 215], 00:36:22.575 | 70.00th=[ 222], 80.00th=[ 234], 90.00th=[ 266], 95.00th=[ 284], 00:36:22.575 | 99.00th=[ 305], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:36:22.575 | 99.99th=[ 330] 00:36:22.575 bw ( KiB/s): min= 224, max= 384, per=4.92%, avg=301.60, stdev=45.63, samples=20 00:36:22.575 iops : min= 56, max= 96, avg=75.40, stdev=11.41, samples=20 00:36:22.575 lat 
(msec) : 250=86.49%, 500=13.51% 00:36:22.575 cpu : usr=97.68%, sys=1.51%, ctx=70, majf=0, minf=9 00:36:22.575 IO depths : 1=1.3%, 2=3.0%, 4=10.8%, 8=73.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:36:22.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.575 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename0: (groupid=0, jobs=1): err= 0: pid=813167: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10143msec) 00:36:22.576 slat (nsec): min=7150, max=51162, avg=23875.63, stdev=8421.92 00:36:22.576 clat (msec): min=120, max=451, avg=281.54, stdev=53.60 00:36:22.576 lat (msec): min=120, max=451, avg=281.57, stdev=53.60 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 121], 5.00th=[ 176], 10.00th=[ 218], 20.00th=[ 266], 00:36:22.576 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 300], 00:36:22.576 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 351], 00:36:22.576 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:36:22.576 | 99.99th=[ 451] 00:36:22.576 bw ( KiB/s): min= 128, max= 368, per=3.63%, avg=222.32, stdev=67.44, samples=19 00:36:22.576 iops : min= 32, max= 92, avg=55.58, stdev=16.86, samples=19 00:36:22.576 lat (msec) : 250=17.71%, 500=82.29% 00:36:22.576 cpu : usr=98.21%, sys=1.42%, ctx=16, majf=0, minf=9 00:36:22.576 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813168: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10100msec) 00:36:22.576 slat (usec): min=20, max=102, avg=70.44, stdev=13.95 00:36:22.576 clat (msec): min=170, max=395, avg=287.95, stdev=31.55 00:36:22.576 lat (msec): min=171, max=395, avg=288.02, stdev=31.55 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 192], 5.00th=[ 226], 10.00th=[ 245], 20.00th=[ 271], 00:36:22.576 | 30.00th=[ 284], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 296], 00:36:22.576 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 334], 00:36:22.576 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 397], 99.95th=[ 397], 00:36:22.576 | 99.99th=[ 397] 00:36:22.576 bw ( KiB/s): min= 128, max= 256, per=3.63%, avg=222.32, stdev=52.50, samples=19 00:36:22.576 iops : min= 32, max= 64, avg=55.58, stdev=13.12, samples=19 00:36:22.576 lat (msec) : 250=15.00%, 500=85.00% 00:36:22.576 cpu : usr=98.12%, sys=1.32%, ctx=52, majf=0, minf=9 00:36:22.576 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813169: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=75, BW=300KiB/s (307kB/s)(3056KiB/10177msec) 00:36:22.576 slat (usec): min=5, 
max=108, avg=31.37, stdev=30.54 00:36:22.576 clat (msec): min=17, max=405, avg=211.58, stdev=62.41 00:36:22.576 lat (msec): min=17, max=405, avg=211.61, stdev=62.43 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 18], 5.00th=[ 63], 10.00th=[ 163], 20.00th=[ 178], 00:36:22.576 | 30.00th=[ 192], 40.00th=[ 205], 50.00th=[ 215], 60.00th=[ 224], 00:36:22.576 | 70.00th=[ 232], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 288], 00:36:22.576 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405], 00:36:22.576 | 99.99th=[ 405] 00:36:22.576 bw ( KiB/s): min= 144, max= 624, per=4.87%, avg=298.60, stdev=95.46, samples=20 00:36:22.576 iops : min= 36, max= 156, avg=74.65, stdev=23.86, samples=20 00:36:22.576 lat (msec) : 20=2.09%, 50=2.09%, 100=2.09%, 250=70.94%, 500=22.77% 00:36:22.576 cpu : usr=98.19%, sys=1.39%, ctx=36, majf=0, minf=9 00:36:22.576 IO depths : 1=1.8%, 2=6.2%, 4=19.1%, 8=62.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813170: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10092msec) 00:36:22.576 slat (nsec): min=8086, max=47743, avg=14692.83, stdev=7338.51 00:36:22.576 clat (msec): min=188, max=397, avg=288.19, stdev=33.89 00:36:22.576 lat (msec): min=188, max=397, avg=288.20, stdev=33.89 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 194], 5.00th=[ 218], 10.00th=[ 232], 20.00th=[ 271], 00:36:22.576 | 30.00th=[ 284], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 296], 00:36:22.576 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 338], 00:36:22.576 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:36:22.576 | 99.99th=[ 397] 00:36:22.576 bw ( KiB/s): min= 128, max= 256, per=3.61%, avg=221.47, stdev=55.75, samples=19 00:36:22.576 iops : min= 32, max= 64, avg=55.37, stdev=13.94, samples=19 00:36:22.576 lat (msec) : 250=16.43%, 500=83.57% 00:36:22.576 cpu : usr=97.45%, sys=1.50%, ctx=84, majf=0, minf=9 00:36:22.576 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813171: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=70, BW=280KiB/s (287kB/s)(2848KiB/10157msec) 00:36:22.576 slat (usec): min=8, max=100, avg=31.68, stdev=29.62 00:36:22.576 clat (msec): min=143, max=366, avg=226.63, stdev=43.00 00:36:22.576 lat (msec): min=143, max=366, avg=226.66, stdev=43.02 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 146], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 188], 00:36:22.576 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 218], 60.00th=[ 222], 00:36:22.576 | 70.00th=[ 243], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 305], 00:36:22.576 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:36:22.576 | 99.99th=[ 368] 00:36:22.576 bw ( KiB/s): min= 128, max= 384, per=4.54%, avg=278.40, stdev=51.23, samples=20 00:36:22.576 iops : min= 
32, max= 96, avg=69.60, stdev=12.81, samples=20 00:36:22.576 lat (msec) : 250=71.07%, 500=28.93% 00:36:22.576 cpu : usr=98.32%, sys=1.29%, ctx=16, majf=0, minf=9 00:36:22.576 IO depths : 1=1.3%, 2=4.1%, 4=14.5%, 8=68.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813173: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=72, BW=290KiB/s (297kB/s)(2944KiB/10157msec) 00:36:22.576 slat (nsec): min=8193, max=60017, avg=18728.39, stdev=11156.29 00:36:22.576 clat (msec): min=149, max=352, avg=220.26, stdev=40.29 00:36:22.576 lat (msec): min=149, max=352, avg=220.28, stdev=40.29 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 188], 00:36:22.576 | 30.00th=[ 203], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 220], 00:36:22.576 | 70.00th=[ 230], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 300], 00:36:22.576 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351], 00:36:22.576 | 99.99th=[ 351] 00:36:22.576 bw ( KiB/s): min= 144, max= 384, per=4.70%, avg=288.00, stdev=59.19, samples=20 00:36:22.576 iops : min= 36, max= 96, avg=72.00, stdev=14.80, samples=20 00:36:22.576 lat (msec) : 250=77.45%, 500=22.55% 00:36:22.576 cpu : usr=98.33%, sys=1.25%, ctx=17, majf=0, minf=9 00:36:22.576 IO depths : 1=1.6%, 2=5.3%, 4=16.8%, 8=65.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813174: Mon Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=74, BW=298KiB/s (305kB/s)(3032KiB/10179msec) 00:36:22.576 slat (usec): min=5, max=167, avg=28.95, stdev=30.03 00:36:22.576 clat (msec): min=9, max=384, avg=214.26, stdev=74.28 00:36:22.576 lat (msec): min=9, max=384, avg=214.29, stdev=74.29 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 148], 20.00th=[ 171], 00:36:22.576 | 30.00th=[ 192], 40.00th=[ 209], 50.00th=[ 218], 60.00th=[ 230], 00:36:22.576 | 70.00th=[ 268], 80.00th=[ 279], 90.00th=[ 296], 95.00th=[ 309], 00:36:22.576 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:36:22.576 | 99.99th=[ 384] 00:36:22.576 bw ( KiB/s): min= 128, max= 752, per=4.84%, avg=296.80, stdev=124.31, samples=20 00:36:22.576 iops : min= 32, max= 188, avg=74.20, stdev=31.08, samples=20 00:36:22.576 lat (msec) : 10=2.11%, 20=2.11%, 50=2.11%, 100=2.11%, 250=61.48% 00:36:22.576 lat (msec) : 500=30.08% 00:36:22.576 cpu : usr=96.84%, sys=1.90%, ctx=78, majf=0, minf=9 00:36:22.576 IO depths : 1=1.7%, 2=5.4%, 4=16.9%, 8=64.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.576 filename1: (groupid=0, jobs=1): err= 0: pid=813175: Mon 
Jul 15 07:05:08 2024 00:36:22.576 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10145msec) 00:36:22.576 slat (usec): min=6, max=100, avg=29.75, stdev=13.42 00:36:22.576 clat (msec): min=174, max=408, avg=289.58, stdev=40.92 00:36:22.576 lat (msec): min=174, max=408, avg=289.61, stdev=40.91 00:36:22.576 clat percentiles (msec): 00:36:22.576 | 1.00th=[ 197], 5.00th=[ 218], 10.00th=[ 230], 20.00th=[ 259], 00:36:22.576 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:36:22.576 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 321], 95.00th=[ 376], 00:36:22.576 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:36:22.576 | 99.99th=[ 409] 00:36:22.576 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=215.58, stdev=56.03, samples=19 00:36:22.576 iops : min= 32, max= 64, avg=53.89, stdev=14.01, samples=19 00:36:22.576 lat (msec) : 250=16.07%, 500=83.93% 00:36:22.576 cpu : usr=96.01%, sys=2.43%, ctx=88, majf=0, minf=9 00:36:22.576 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:22.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.576 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename1: (groupid=0, jobs=1): err= 0: pid=813176: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10159msec) 00:36:22.577 slat (usec): min=16, max=118, avg=75.17, stdev=18.91 00:36:22.577 clat (msec): min=162, max=405, avg=281.53, stdev=38.73 00:36:22.577 lat (msec): min=162, max=406, avg=281.61, stdev=38.74 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 197], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 247], 00:36:22.577 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 296], 00:36:22.577 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:36:22.577 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 405], 99.95th=[ 405], 00:36:22.577 | 99.99th=[ 405] 00:36:22.577 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=224.00, stdev=55.18, samples=20 00:36:22.577 iops : min= 32, max= 64, avg=56.00, stdev=13.80, samples=20 00:36:22.577 lat (msec) : 250=21.18%, 500=78.82% 00:36:22.577 cpu : usr=98.25%, sys=1.30%, ctx=9, majf=0, minf=9 00:36:22.577 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813177: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10144msec) 00:36:22.577 slat (nsec): min=3836, max=89215, avg=18943.23, stdev=10061.94 00:36:22.577 clat (msec): min=171, max=432, avg=281.60, stdev=43.36 00:36:22.577 lat (msec): min=171, max=432, avg=281.62, stdev=43.35 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 218], 20.00th=[ 249], 00:36:22.577 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 292], 00:36:22.577 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 321], 95.00th=[ 338], 00:36:22.577 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 435], 00:36:22.577 | 99.99th=[ 435] 00:36:22.577 bw ( KiB/s): 
min= 128, max= 272, per=3.63%, avg=222.32, stdev=56.42, samples=19 00:36:22.577 iops : min= 32, max= 68, avg=55.58, stdev=14.10, samples=19 00:36:22.577 lat (msec) : 250=21.18%, 500=78.82% 00:36:22.577 cpu : usr=97.43%, sys=1.72%, ctx=54, majf=0, minf=9 00:36:22.577 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813178: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10157msec) 00:36:22.577 slat (usec): min=7, max=117, avg=76.09, stdev=16.60 00:36:22.577 clat (msec): min=173, max=405, avg=281.49, stdev=34.70 00:36:22.577 lat (msec): min=173, max=405, avg=281.57, stdev=34.71 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 209], 5.00th=[ 213], 10.00th=[ 228], 20.00th=[ 255], 00:36:22.577 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 296], 00:36:22.577 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 317], 00:36:22.577 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 405], 99.95th=[ 405], 00:36:22.577 | 99.99th=[ 405] 00:36:22.577 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=224.00, stdev=56.87, samples=20 00:36:22.577 iops : min= 32, max= 64, avg=56.00, stdev=14.22, samples=20 00:36:22.577 lat (msec) : 250=18.75%, 500=81.25% 00:36:22.577 cpu : usr=98.04%, sys=1.53%, ctx=11, majf=0, minf=9 00:36:22.577 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813179: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10143msec) 00:36:22.577 slat (nsec): min=8475, max=89661, avg=19747.17, stdev=11614.46 00:36:22.577 clat (msec): min=120, max=461, avg=289.59, stdev=51.00 00:36:22.577 lat (msec): min=120, max=461, avg=289.61, stdev=51.00 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 232], 20.00th=[ 275], 00:36:22.577 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 288], 60.00th=[ 300], 00:36:22.577 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 393], 00:36:22.577 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:36:22.577 | 99.99th=[ 464] 00:36:22.577 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=215.58, stdev=59.48, samples=19 00:36:22.577 iops : min= 32, max= 64, avg=53.89, stdev=14.87, samples=19 00:36:22.577 lat (msec) : 250=13.57%, 500=86.43% 00:36:22.577 cpu : usr=97.81%, sys=1.54%, ctx=35, majf=0, minf=9 00:36:22.577 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, 
jobs=1): err= 0: pid=813180: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=83, BW=335KiB/s (343kB/s)(3408KiB/10181msec) 00:36:22.577 slat (usec): min=3, max=190, avg=17.85, stdev=18.91 00:36:22.577 clat (msec): min=8, max=306, avg=190.71, stdev=56.08 00:36:22.577 lat (msec): min=8, max=306, avg=190.73, stdev=56.08 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 142], 20.00th=[ 174], 00:36:22.577 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 201], 60.00th=[ 207], 00:36:22.577 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 239], 95.00th=[ 259], 00:36:22.577 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309], 00:36:22.577 | 99.99th=[ 309] 00:36:22.577 bw ( KiB/s): min= 256, max= 769, per=5.46%, avg=334.45, stdev=111.17, samples=20 00:36:22.577 iops : min= 64, max= 192, avg=83.60, stdev=27.74, samples=20 00:36:22.577 lat (msec) : 10=1.06%, 20=3.52%, 50=1.06%, 100=1.88%, 250=84.27% 00:36:22.577 lat (msec) : 500=8.22% 00:36:22.577 cpu : usr=98.17%, sys=1.39%, ctx=25, majf=0, minf=9 00:36:22.577 IO depths : 1=1.2%, 2=3.2%, 4=11.7%, 8=72.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=90.3%, 8=4.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813181: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10143msec) 00:36:22.577 slat (usec): min=15, max=158, avg=58.37, stdev=34.11 00:36:22.577 clat (msec): min=161, max=460, avg=289.34, stdev=40.35 00:36:22.577 lat (msec): min=161, max=460, avg=289.39, stdev=40.35 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 197], 5.00th=[ 218], 10.00th=[ 232], 20.00th=[ 259], 00:36:22.577 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:36:22.577 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 372], 00:36:22.577 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 460], 99.95th=[ 460], 00:36:22.577 | 99.99th=[ 460] 00:36:22.577 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=215.58, stdev=57.78, samples=19 00:36:22.577 iops : min= 32, max= 64, avg=53.89, stdev=14.44, samples=19 00:36:22.577 lat (msec) : 250=15.71%, 500=84.29% 00:36:22.577 cpu : usr=97.76%, sys=1.48%, ctx=73, majf=0, minf=9 00:36:22.577 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813183: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10008msec) 00:36:22.577 slat (usec): min=3, max=130, avg=73.34, stdev=21.33 00:36:22.577 clat (msec): min=7, max=323, avg=255.95, stdev=83.05 00:36:22.577 lat (msec): min=8, max=323, avg=256.02, stdev=83.06 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 63], 20.00th=[ 232], 00:36:22.577 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 292], 00:36:22.577 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 321], 00:36:22.577 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 
326], 99.95th=[ 326], 00:36:22.577 | 99.99th=[ 326] 00:36:22.577 bw ( KiB/s): min= 128, max= 641, per=3.97%, avg=243.25, stdev=116.90, samples=20 00:36:22.577 iops : min= 32, max= 160, avg=60.80, stdev=29.18, samples=20 00:36:22.577 lat (msec) : 10=2.56%, 20=2.56%, 50=2.56%, 100=2.56%, 250=12.82% 00:36:22.577 lat (msec) : 500=76.92% 00:36:22.577 cpu : usr=98.24%, sys=1.22%, ctx=29, majf=0, minf=9 00:36:22.577 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813184: Mon Jul 15 07:05:08 2024 00:36:22.577 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10154msec) 00:36:22.577 slat (usec): min=5, max=145, avg=50.23, stdev=28.12 00:36:22.577 clat (msec): min=120, max=451, avg=281.67, stdev=57.96 00:36:22.577 lat (msec): min=120, max=451, avg=281.72, stdev=57.97 00:36:22.577 clat percentiles (msec): 00:36:22.577 | 1.00th=[ 122], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 266], 00:36:22.577 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 300], 00:36:22.577 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 363], 00:36:22.577 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:36:22.577 | 99.99th=[ 451] 00:36:22.577 bw ( KiB/s): min= 128, max= 368, per=3.64%, avg=224.00, stdev=64.63, samples=20 00:36:22.577 iops : min= 32, max= 92, avg=56.00, stdev=16.16, samples=20 00:36:22.577 lat (msec) : 250=18.75%, 500=81.25% 00:36:22.577 cpu : usr=97.27%, sys=1.63%, ctx=33, majf=0, minf=9 00:36:22.577 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:22.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.577 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.577 filename2: (groupid=0, jobs=1): err= 0: pid=813185: Mon Jul 15 07:05:08 2024 00:36:22.578 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10158msec) 00:36:22.578 slat (nsec): min=7118, max=66866, avg=18386.57, stdev=11400.02 00:36:22.578 clat (msec): min=161, max=382, avg=230.69, stdev=43.25 00:36:22.578 lat (msec): min=161, max=382, avg=230.71, stdev=43.25 00:36:22.578 clat percentiles (msec): 00:36:22.578 | 1.00th=[ 165], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 197], 00:36:22.578 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 224], 60.00th=[ 230], 00:36:22.578 | 70.00th=[ 255], 80.00th=[ 279], 90.00th=[ 300], 95.00th=[ 305], 00:36:22.578 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384], 00:36:22.578 | 99.99th=[ 384] 00:36:22.578 bw ( KiB/s): min= 144, max= 384, per=4.49%, avg=275.20, stdev=67.00, samples=20 00:36:22.578 iops : min= 36, max= 96, avg=68.80, stdev=16.75, samples=20 00:36:22.578 lat (msec) : 250=69.32%, 500=30.68% 00:36:22.578 cpu : usr=98.24%, sys=1.39%, ctx=20, majf=0, minf=9 00:36:22.578 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:22.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.578 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.578 issued rwts: 
total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:22.578 00:36:22.578 Run status group 0 (all jobs): 00:36:22.578 READ: bw=6121KiB/s (6268kB/s), 221KiB/s-335KiB/s (226kB/s-343kB/s), io=60.9MiB (63.8MB), run=10008-10185msec 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 bdev_null0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 [2024-07-15 07:05:09.118038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 bdev_null1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:22.578 { 00:36:22.578 "params": { 00:36:22.578 "name": "Nvme$subsystem", 00:36:22.578 "trtype": "$TEST_TRANSPORT", 00:36:22.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.578 "adrfam": "ipv4", 00:36:22.578 "trsvcid": "$NVMF_PORT", 00:36:22.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.578 "hdgst": ${hdgst:-false}, 00:36:22.578 "ddgst": ${ddgst:-false} 00:36:22.578 }, 00:36:22.578 "method": "bdev_nvme_attach_controller" 00:36:22.578 } 00:36:22.578 EOF 00:36:22.578 )") 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:22.578 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:22.579 { 00:36:22.579 "params": { 00:36:22.579 "name": "Nvme$subsystem", 00:36:22.579 "trtype": "$TEST_TRANSPORT", 00:36:22.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.579 "adrfam": "ipv4", 00:36:22.579 "trsvcid": "$NVMF_PORT", 00:36:22.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.579 "hdgst": ${hdgst:-false}, 00:36:22.579 "ddgst": ${ddgst:-false} 00:36:22.579 }, 00:36:22.579 "method": "bdev_nvme_attach_controller" 00:36:22.579 } 00:36:22.579 EOF 00:36:22.579 )") 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
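The gen_nvmf_target_json trace above assembles one bdev_nvme_attach_controller fragment per subsystem with a heredoc, joins the fragments with IFS=",", and runs the result through jq before fio consumes it on /dev/fd/62. A minimal bash sketch of that pattern follows; the outer "subsystems"/"config" envelope is an assumption about the full output, since only the fragments and the jq/printf steps are visible in the trace:

gen_json_sketch() {
    local subsystem config=()
    for subsystem in 0 1; do
        # one attach-controller fragment per subsystem, as in the trace
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp",
              "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    # join the fragments with commas; jq validates and pretty-prints the result
    local joined
    joined=$(IFS=,; printf '%s' "${config[*]}")
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $joined ] } ] }
EOF
}

The fully expanded parameters that printf produces for this run appear verbatim just below.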
00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:22.579 "params": { 00:36:22.579 "name": "Nvme0", 00:36:22.579 "trtype": "tcp", 00:36:22.579 "traddr": "10.0.0.2", 00:36:22.579 "adrfam": "ipv4", 00:36:22.579 "trsvcid": "4420", 00:36:22.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.579 "hdgst": false, 00:36:22.579 "ddgst": false 00:36:22.579 }, 00:36:22.579 "method": "bdev_nvme_attach_controller" 00:36:22.579 },{ 00:36:22.579 "params": { 00:36:22.579 "name": "Nvme1", 00:36:22.579 "trtype": "tcp", 00:36:22.579 "traddr": "10.0.0.2", 00:36:22.579 "adrfam": "ipv4", 00:36:22.579 "trsvcid": "4420", 00:36:22.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:22.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:22.579 "hdgst": false, 00:36:22.579 "ddgst": false 00:36:22.579 }, 00:36:22.579 "method": "bdev_nvme_attach_controller" 00:36:22.579 }' 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:22.579 07:05:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.579 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:22.579 ... 00:36:22.579 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:22.579 ... 
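For reference, a hypothetical fio job file matching the header fio prints next; gen_fio_conf's actual output is piped over /dev/fd/61 and never echoed, so this is an approximation reconstructed from the traced parameters (bs=8k,16k,128k sets the read, write, and trim block sizes; two files times numjobs=2 gives the 4 threads; filename=Nvme0n1 assumes the default namespace-bdev naming for controller Nvme0):

cat > rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev      # submit I/O through SPDK bdevs, not the kernel
runtime=5
time_based=1

[filename0]
filename=Nvme0n1        # namespace bdev from bdev_nvme_attach_controller
rw=randread
bs=8k,16k,128k          # read,write,trim block sizes from the trace
iodepth=8
numjobs=2

[filename1]
filename=Nvme1n1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
EOF

The harness then hands such a file to fio together with the JSON config, as the LD_PRELOAD invocation immediately above shows (--spdk_json_conf /dev/fd/62, job file on /dev/fd/61).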
00:36:22.579 fio-3.35 00:36:22.579 Starting 4 threads 00:36:22.579 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.846 00:36:27.846 filename0: (groupid=0, jobs=1): err= 0: pid=814677: Mon Jul 15 07:05:15 2024 00:36:27.846 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5002msec) 00:36:27.846 slat (nsec): min=4528, max=38229, avg=13529.84, stdev=4763.93 00:36:27.846 clat (usec): min=1000, max=7691, avg=4204.21, stdev=378.49 00:36:27.846 lat (usec): min=1014, max=7706, avg=4217.74, stdev=378.67 00:36:27.846 clat percentiles (usec): 00:36:27.846 | 1.00th=[ 2835], 5.00th=[ 3556], 10.00th=[ 3949], 20.00th=[ 4178], 00:36:27.846 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:27.846 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:36:27.846 | 99.00th=[ 5407], 99.50th=[ 6194], 99.90th=[ 6783], 99.95th=[ 7177], 00:36:27.846 | 99.99th=[ 7701] 00:36:27.846 bw ( KiB/s): min=14720, max=16480, per=25.34%, avg=15057.60, stdev=528.11, samples=10 00:36:27.846 iops : min= 1840, max= 2060, avg=1882.20, stdev=66.01, samples=10 00:36:27.846 lat (msec) : 2=0.17%, 4=10.90%, 10=88.93% 00:36:27.846 cpu : usr=90.58%, sys=7.30%, ctx=254, majf=0, minf=0 00:36:27.846 IO depths : 1=0.1%, 2=15.3%, 4=58.6%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 issued rwts: total=9413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:27.846 filename0: (groupid=0, jobs=1): err= 0: pid=814678: Mon Jul 15 07:05:15 2024 00:36:27.846 read: IOPS=1845, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5003msec) 00:36:27.846 slat (nsec): min=4509, max=37019, avg=14970.92, stdev=4045.85 00:36:27.846 clat (usec): min=833, max=7667, avg=4278.60, stdev=424.77 00:36:27.846 lat (usec): min=847, max=7683, avg=4293.57, stdev=424.58 00:36:27.846 clat percentiles (usec): 00:36:27.846 | 1.00th=[ 3261], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4178], 00:36:27.846 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:27.846 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4359], 95.00th=[ 4621], 00:36:27.846 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7570], 00:36:27.846 | 99.99th=[ 7701] 00:36:27.846 bw ( KiB/s): min=13952, max=14976, per=24.85%, avg=14763.20, stdev=298.96, samples=10 00:36:27.846 iops : min= 1744, max= 1872, avg=1845.40, stdev=37.37, samples=10 00:36:27.846 lat (usec) : 1000=0.03% 00:36:27.846 lat (msec) : 2=0.22%, 4=5.79%, 10=93.96% 00:36:27.846 cpu : usr=90.50%, sys=7.52%, ctx=119, majf=0, minf=9 00:36:27.846 IO depths : 1=0.4%, 2=15.6%, 4=58.3%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 issued rwts: total=9235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:27.846 filename1: (groupid=0, jobs=1): err= 0: pid=814679: Mon Jul 15 07:05:15 2024 00:36:27.846 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:36:27.846 slat (nsec): min=4964, max=41832, avg=15732.51, stdev=5322.11 00:36:27.846 clat (usec): min=954, max=8417, avg=4258.73, stdev=446.38 00:36:27.846 lat (usec): min=968, max=8431, avg=4274.46, stdev=446.48 00:36:27.846 clat percentiles (usec): 00:36:27.846 
| 1.00th=[ 2966], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4178], 00:36:27.846 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:27.846 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4621], 00:36:27.846 | 99.00th=[ 6259], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 8356], 00:36:27.846 | 99.99th=[ 8455] 00:36:27.846 bw ( KiB/s): min=14480, max=15056, per=24.92%, avg=14807.60, stdev=197.86, samples=10 00:36:27.846 iops : min= 1810, max= 1882, avg=1850.90, stdev=24.82, samples=10 00:36:27.846 lat (usec) : 1000=0.02% 00:36:27.846 lat (msec) : 2=0.32%, 4=6.12%, 10=93.53% 00:36:27.846 cpu : usr=91.04%, sys=6.94%, ctx=195, majf=0, minf=9 00:36:27.846 IO depths : 1=0.1%, 2=19.7%, 4=54.0%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 issued rwts: total=9261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:27.846 filename1: (groupid=0, jobs=1): err= 0: pid=814680: Mon Jul 15 07:05:15 2024 00:36:27.846 read: IOPS=1849, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5001msec) 00:36:27.846 slat (nsec): min=4387, max=50172, avg=15103.52, stdev=4313.53 00:36:27.846 clat (usec): min=724, max=8348, avg=4268.80, stdev=410.72 00:36:27.846 lat (usec): min=738, max=8359, avg=4283.90, stdev=410.47 00:36:27.846 clat percentiles (usec): 00:36:27.846 | 1.00th=[ 3261], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4178], 00:36:27.846 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:27.846 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4555], 00:36:27.846 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7439], 00:36:27.846 | 99.99th=[ 8356] 00:36:27.846 bw ( KiB/s): min=14160, max=14976, per=24.90%, avg=14796.50, stdev=232.52, samples=10 00:36:27.846 iops : min= 1770, max= 1872, avg=1849.50, stdev=29.08, samples=10 00:36:27.846 lat (usec) : 750=0.01%, 1000=0.02% 00:36:27.846 lat (msec) : 2=0.26%, 4=5.61%, 10=94.10% 00:36:27.846 cpu : usr=88.82%, sys=7.98%, ctx=880, majf=0, minf=0 00:36:27.846 IO depths : 1=0.1%, 2=20.6%, 4=53.6%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.846 issued rwts: total=9247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:27.846 00:36:27.846 Run status group 0 (all jobs): 00:36:27.846 READ: bw=58.0MiB/s (60.8MB/s), 14.4MiB/s-14.7MiB/s (15.1MB/s-15.4MB/s), io=290MiB (304MB), run=5001-5003msec 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.846 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.847 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.847 00:36:27.847 real 0m24.395s 00:36:27.847 user 4m34.658s 00:36:27.847 sys 0m7.393s 00:36:27.847 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:27.847 07:05:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.847 ************************************ 00:36:27.847 END TEST fio_dif_rand_params 00:36:27.847 ************************************ 00:36:27.847 07:05:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:27.847 07:05:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:27.847 07:05:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:27.847 07:05:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.847 ************************************ 00:36:27.847 START TEST fio_dif_digest 00:36:27.847 ************************************ 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.847 bdev_null0 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.847 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 [2024-07-15 07:05:15.464709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:28.105 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:28.106 { 00:36:28.106 "params": { 00:36:28.106 "name": "Nvme$subsystem", 00:36:28.106 "trtype": "$TEST_TRANSPORT", 00:36:28.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.106 "adrfam": "ipv4", 00:36:28.106 "trsvcid": "$NVMF_PORT", 00:36:28.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.106 
"hdgst": ${hdgst:-false}, 00:36:28.106 "ddgst": ${ddgst:-false} 00:36:28.106 }, 00:36:28.106 "method": "bdev_nvme_attach_controller" 00:36:28.106 } 00:36:28.106 EOF 00:36:28.106 )") 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:28.106 "params": { 00:36:28.106 "name": "Nvme0", 00:36:28.106 "trtype": "tcp", 00:36:28.106 "traddr": "10.0.0.2", 00:36:28.106 "adrfam": "ipv4", 00:36:28.106 "trsvcid": "4420", 00:36:28.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.106 "hdgst": true, 00:36:28.106 "ddgst": true 00:36:28.106 }, 00:36:28.106 "method": "bdev_nvme_attach_controller" 00:36:28.106 }' 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:28.106 07:05:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.365 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:28.365 ... 
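Note that hdgst and ddgst are attach-time transport parameters carried in the JSON printed above, not fio job options; from the initiator's side the digest test differs only in those two fields being true. If the generated config were saved to a file (hypothetical name nvme.json, and assuming the "subsystems"/"config" envelope sketched earlier), the digest settings could be checked with:

jq '.subsystems[].config[].params | {name, hdgst, ddgst}' nvme.json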
00:36:28.365 fio-3.35 00:36:28.365 Starting 3 threads 00:36:28.365 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.572 00:36:40.572 filename0: (groupid=0, jobs=1): err= 0: pid=815433: Mon Jul 15 07:05:26 2024 00:36:40.572 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(260MiB/10008msec) 00:36:40.572 slat (nsec): min=4344, max=41243, avg=16960.33, stdev=3787.35 00:36:40.572 clat (usec): min=7697, max=56853, avg=14419.34, stdev=2415.78 00:36:40.572 lat (usec): min=7712, max=56870, avg=14436.30, stdev=2415.65 00:36:40.572 clat percentiles (usec): 00:36:40.572 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:36:40.572 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:36:40.572 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:36:40.572 | 99.00th=[17171], 99.50th=[17695], 99.90th=[54789], 99.95th=[56886], 00:36:40.572 | 99.99th=[56886] 00:36:40.572 bw ( KiB/s): min=22272, max=27392, per=33.19%, avg=26575.40, stdev=1061.23, samples=20 00:36:40.572 iops : min= 174, max= 214, avg=207.60, stdev= 8.30, samples=20 00:36:40.572 lat (msec) : 10=0.10%, 20=99.47%, 50=0.14%, 100=0.29% 00:36:40.572 cpu : usr=91.13%, sys=7.52%, ctx=235, majf=0, minf=177 00:36:40.572 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:40.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:40.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:40.572 filename0: (groupid=0, jobs=1): err= 0: pid=815434: Mon Jul 15 07:05:26 2024 00:36:40.572 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(263MiB/10007msec) 00:36:40.572 slat (nsec): min=4431, max=45196, avg=19082.01, stdev=6103.61 00:36:40.572 clat (usec): min=9100, max=21313, avg=14250.00, stdev=1140.58 00:36:40.572 lat (usec): min=9116, max=21328, avg=14269.08, stdev=1140.35 00:36:40.572 clat percentiles (usec): 00:36:40.572 | 1.00th=[10683], 5.00th=[12518], 10.00th=[13042], 20.00th=[13435], 00:36:40.572 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:36:40.572 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:36:40.572 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21365], 99.95th=[21365], 00:36:40.572 | 99.99th=[21365] 00:36:40.572 bw ( KiB/s): min=25344, max=28928, per=33.57%, avg=26880.00, stdev=770.24, samples=20 00:36:40.572 iops : min= 198, max= 226, avg=210.00, stdev= 6.02, samples=20 00:36:40.572 lat (msec) : 10=0.90%, 20=98.95%, 50=0.14% 00:36:40.572 cpu : usr=87.90%, sys=9.15%, ctx=499, majf=0, minf=104 00:36:40.572 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:40.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:40.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:40.572 filename0: (groupid=0, jobs=1): err= 0: pid=815435: Mon Jul 15 07:05:26 2024 00:36:40.572 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10047msec) 00:36:40.572 slat (nsec): min=4528, max=50204, avg=15378.95, stdev=2639.27 00:36:40.572 clat (usec): min=9357, max=50921, avg=14291.59, stdev=1602.06 00:36:40.572 lat (usec): min=9371, max=50940, avg=14306.97, stdev=1602.31 00:36:40.572 clat percentiles (usec): 00:36:40.572 | 
1.00th=[10814], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:36:40.572 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:36:40.572 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:36:40.572 | 99.00th=[16909], 99.50th=[17695], 99.90th=[24249], 99.95th=[47449], 00:36:40.572 | 99.99th=[51119] 00:36:40.572 bw ( KiB/s): min=25344, max=28416, per=33.58%, avg=26892.80, stdev=745.09, samples=20 00:36:40.572 iops : min= 198, max= 222, avg=210.10, stdev= 5.82, samples=20 00:36:40.572 lat (msec) : 10=0.48%, 20=99.33%, 50=0.14%, 100=0.05% 00:36:40.572 cpu : usr=93.30%, sys=6.23%, ctx=25, majf=0, minf=112 00:36:40.572 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:40.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.572 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:40.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:40.572 00:36:40.572 Run status group 0 (all jobs): 00:36:40.572 READ: bw=78.2MiB/s (82.0MB/s), 26.0MiB/s-26.3MiB/s (27.2MB/s-27.5MB/s), io=786MiB (824MB), run=10007-10047msec 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.572 00:36:40.572 real 0m11.124s 00:36:40.572 user 0m28.512s 00:36:40.572 sys 0m2.578s 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:40.572 07:05:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:40.572 ************************************ 00:36:40.573 END TEST fio_dif_digest 00:36:40.573 ************************************ 00:36:40.573 07:05:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:40.573 07:05:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:40.573 rmmod nvme_tcp 00:36:40.573 rmmod nvme_fabrics 
00:36:40.573 rmmod nvme_keyring 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 809392 ']' 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 809392 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 809392 ']' 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 809392 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 809392 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 809392' 00:36:40.573 killing process with pid 809392 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@965 -- # kill 809392 00:36:40.573 07:05:26 nvmf_dif -- common/autotest_common.sh@970 -- # wait 809392 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:40.573 07:05:26 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:40.573 Waiting for block devices as requested 00:36:40.573 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:40.573 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:40.573 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:40.855 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:40.855 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:40.855 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:40.855 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:41.119 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:41.119 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:41.119 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:41.119 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:41.377 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:41.377 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:41.377 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:41.377 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:41.635 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:41.635 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:41.635 07:05:29 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:41.635 07:05:29 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:41.635 07:05:29 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:41.635 07:05:29 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:41.635 07:05:29 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.635 07:05:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:41.635 07:05:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.166 07:05:31 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:44.166 00:36:44.166 real 1m6.669s 00:36:44.166 user 6m30.754s 00:36:44.166 sys 0m19.078s 00:36:44.166 07:05:31 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:44.167 07:05:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
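The teardown traced above reduces to a short manual sequence: unload the kernel initiator modules (producing the rmmod lines), stop the nvmf target app, rebind the PCI devices to their kernel drivers (the vfio-pci -> nvme/ioatdma lines), and flush the test address. The pid and paths below are the ones from this run:

sync
modprobe -v -r nvme-tcp        # emits "rmmod nvme_tcp"; dependencies
modprobe -v -r nvme-fabrics    # (nvme_fabrics, nvme_keyring) go with it
kill -0 809392 && kill 809392  # nvmf target pid for this run (reactor_0)
wait 809392
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
ip -4 addr flush cvl_0_1       # drop the test IP from the second NIC port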
00:36:44.167 ************************************ 00:36:44.167 END TEST nvmf_dif 00:36:44.167 ************************************ 00:36:44.167 07:05:31 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:44.167 07:05:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:44.167 07:05:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:44.167 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:36:44.167 ************************************ 00:36:44.167 START TEST nvmf_abort_qd_sizes 00:36:44.167 ************************************ 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:44.167 * Looking for test storage... 00:36:44.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.167 07:05:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:44.167 07:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:46.067 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:46.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:46.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:46.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:46.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
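The device discovery above is pure sysfs walking: supported Intel (e810/x722) and Mellanox device IDs are collected into pci_devs, and each matching PCI function is mapped to its kernel interface name through /sys/bus/pci/devices/<bdf>/net/. A sketch of that mapping, with this host's 0000:0a:00.x addresses standing in as examples:

    # hedged sketch: resolve net device names for a list of NIC PCI functions
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue    # skip functions with no netdev bound
            echo "Found net devices under $pci: ${path##*/}"
        done
    done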
00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:46.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:36:46.068 00:36:46.068 --- 10.0.0.2 ping statistics --- 00:36:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.068 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
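nvmf_tcp_init wires the two ports of one physical NIC into a loopback topology: the target-side port is isolated in a network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) genuinely traverse the wire; the ping exchange that continues below verifies both directions. Restated as a sketch, with the cvl_0_* names as host-specific placeholders:

    # hedged sketch of the namespace loopback built above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                           # initiator -> target reachability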
00:36:46.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:36:46.068 00:36:46.068 --- 10.0.0.1 ping statistics --- 00:36:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.068 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:46.068 07:05:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:47.447 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:47.447 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:47.447 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:48.016 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:48.275 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=820222 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 820222 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 820222 ']' 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:48.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:48.276 07:05:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.276 [2024-07-15 07:05:35.874584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:48.276 [2024-07-15 07:05:35.874677] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.536 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.536 [2024-07-15 07:05:35.940567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.536 [2024-07-15 07:05:36.030650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.536 [2024-07-15 07:05:36.030722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.536 [2024-07-15 07:05:36.030735] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.536 [2024-07-15 07:05:36.030760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.536 [2024-07-15 07:05:36.030770] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.536 [2024-07-15 07:05:36.030861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.536 [2024-07-15 07:05:36.030927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.536 [2024-07-15 07:05:36.031213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:48.536 [2024-07-15 07:05:36.031216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:48.796 07:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.796 ************************************ 00:36:48.796 START TEST spdk_target_abort 00:36:48.796 ************************************ 00:36:48.796 07:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:48.796 07:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:48.796 07:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:48.796 07:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.796 07:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.087 spdk_targetn1 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.087 [2024-07-15 07:05:39.056731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.087 [2024-07-15 07:05:39.089096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:52.087 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.088 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.088 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.088 07:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.088 EAL: No free 2048 kB hugepages reported on node 1 
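With the subsystem, namespace and TCP listener in place, rabort simply assembles the transport ID string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and runs the abort example once per queue depth; the results for qd 4, 24 and 64 follow below. An equivalent standalone invocation, assuming the current directory is the SPDK checkout (the job uses its Jenkins workspace path instead):

    # hedged sketch of one rabort pass against the SPDK target created above
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done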
00:36:54.618 Initializing NVMe Controllers 00:36:54.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.618 Initialization complete. Launching workers. 00:36:54.618 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10557, failed: 0 00:36:54.618 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1196, failed to submit 9361 00:36:54.618 success 780, unsuccess 416, failed 0 00:36:54.618 07:05:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.618 07:05:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.876 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.157 Initializing NVMe Controllers 00:36:58.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.157 Initialization complete. Launching workers. 00:36:58.157 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8496, failed: 0 00:36:58.157 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7251 00:36:58.157 success 306, unsuccess 939, failed 0 00:36:58.157 07:05:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:58.158 07:05:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.158 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.497 Initializing NVMe Controllers 00:37:01.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.497 Initialization complete. Launching workers. 
00:37:01.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31711, failed: 0 00:37:01.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2690, failed to submit 29021 00:37:01.497 success 526, unsuccess 2164, failed 0 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.497 07:05:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 820222 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 820222 ']' 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 820222 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 820222 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 820222' 00:37:02.873 killing process with pid 820222 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 820222 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 820222 00:37:02.873 00:37:02.873 real 0m14.195s 00:37:02.873 user 0m53.512s 00:37:02.873 sys 0m2.747s 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.873 ************************************ 00:37:02.873 END TEST spdk_target_abort 00:37:02.873 ************************************ 00:37:02.873 07:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:02.873 07:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:02.873 07:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:02.873 07:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:02.873 ************************************ 00:37:02.873 START TEST kernel_target_abort 00:37:02.873 
************************************ 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.873 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:02.874 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:03.134 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:03.134 07:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:04.071 Waiting for block devices as requested 00:37:04.071 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:04.071 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:04.328 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:04.328 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:04.328 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:04.586 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:04.586 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:04.586 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:04.586 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:04.586 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:04.846 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:04.846 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:04.846 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:04.846 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:05.104 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:05.104 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:05.104 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:05.364 No valid GPT data, bailing 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:05.364 07:05:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:37:05.364 00:37:05.364 Discovery Log Number of Records 2, Generation counter 2 00:37:05.364 =====Discovery Log Entry 0====== 00:37:05.364 trtype: tcp 00:37:05.364 adrfam: ipv4 00:37:05.364 subtype: current discovery subsystem 00:37:05.364 treq: not specified, sq flow control disable supported 00:37:05.364 portid: 1 00:37:05.364 trsvcid: 4420 00:37:05.364 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:05.364 traddr: 10.0.0.1 00:37:05.364 eflags: none 00:37:05.364 sectype: none 00:37:05.364 =====Discovery Log Entry 1====== 00:37:05.364 trtype: tcp 00:37:05.364 adrfam: ipv4 00:37:05.364 subtype: nvme subsystem 00:37:05.364 treq: not specified, sq flow control disable supported 00:37:05.364 portid: 1 00:37:05.364 trsvcid: 4420 00:37:05.364 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:05.364 traddr: 10.0.0.1 00:37:05.364 eflags: none 00:37:05.364 sectype: none 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.364 07:05:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.364 07:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.364 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.651 Initializing NVMe Controllers 00:37:08.651 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:08.651 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:08.651 Initialization complete. Launching workers. 00:37:08.651 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36814, failed: 0 00:37:08.651 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36814, failed to submit 0 00:37:08.651 success 0, unsuccess 36814, failed 0 00:37:08.651 07:05:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.651 07:05:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.651 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.936 Initializing NVMe Controllers 00:37:11.936 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:11.936 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:11.936 Initialization complete. Launching workers. 
00:37:11.936 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67326, failed: 0 00:37:11.936 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16986, failed to submit 50340 00:37:11.936 success 0, unsuccess 16986, failed 0 00:37:11.936 07:05:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:11.936 07:05:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:11.936 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.216 Initializing NVMe Controllers 00:37:15.216 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:15.216 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.216 Initialization complete. Launching workers. 00:37:15.216 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65478, failed: 0 00:37:15.216 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16374, failed to submit 49104 00:37:15.216 success 0, unsuccess 16374, failed 0 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:15.216 07:06:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:15.796 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:15.796 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:15.796 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:16.057 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:16.057 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:16.057 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:16.057 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:16.057 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:16.057 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:16.057 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:16.991 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:16.991 00:37:16.991 real 0m14.139s 00:37:16.991 user 0m5.317s 00:37:16.991 sys 0m3.313s 00:37:16.991 07:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:16.991 07:06:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:16.991 ************************************ 00:37:16.991 END TEST kernel_target_abort 00:37:16.991 ************************************ 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:17.248 rmmod nvme_tcp 00:37:17.248 rmmod nvme_fabrics 00:37:17.248 rmmod nvme_keyring 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 820222 ']' 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 820222 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 820222 ']' 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 820222 00:37:17.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (820222) - No such process 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 820222 is not found' 00:37:17.248 Process with pid 820222 is not found 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:17.248 07:06:04 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:18.179 Waiting for block devices as requested 00:37:18.179 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:18.438 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:18.438 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:18.438 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:18.697 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:18.697 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:18.697 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:18.697 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:18.966 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:18.966 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:18.966 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:18.966 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:19.253 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:19.253 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:19.254 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:19.254 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:37:19.254 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:19.511 07:06:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.413 07:06:08 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:21.413 00:37:21.413 real 0m37.669s 00:37:21.413 user 1m0.983s 00:37:21.413 sys 0m9.337s 00:37:21.413 07:06:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:21.413 07:06:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.413 ************************************ 00:37:21.413 END TEST nvmf_abort_qd_sizes 00:37:21.413 ************************************ 00:37:21.413 07:06:09 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:21.413 07:06:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:21.413 07:06:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:21.413 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:37:21.672 ************************************ 00:37:21.672 START TEST keyring_file 00:37:21.672 ************************************ 00:37:21.672 07:06:09 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:21.672 * Looking for test storage... 
00:37:21.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:21.672 07:06:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:21.672 07:06:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.672 07:06:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.672 07:06:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.672 07:06:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.672 07:06:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.672 07:06:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.672 07:06:09 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.672 07:06:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:21.672 07:06:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.672 07:06:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tB3FZWFEov 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:21.673 07:06:09 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tB3FZWFEov 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tB3FZWFEov 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tB3FZWFEov 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z8pF2qKgsQ 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:21.673 07:06:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z8pF2qKgsQ 00:37:21.673 07:06:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z8pF2qKgsQ 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.z8pF2qKgsQ 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=826591 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:21.673 07:06:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 826591 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 826591 ']' 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:21.673 07:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.673 [2024-07-15 07:06:09.241446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
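(Annotation: both keys above come out of prep_key in keyring/common.sh: the hex PSK is wrapped into the NVMe TLS interchange format by the inline-python format_interchange_psk helper, written to a mktemp file, and locked to mode 0600. A condensed sketch of that flow — the helper's exact byte layout (roughly NVMeTLSkey-1:<digest>:<base64 payload>:) is an assumption here; only its usage is taken from the trace:

    # sketch of: prep_key key0 00112233445566778899aabbccddeeff 0
    key=00112233445566778899aabbccddeeff
    digest=0                                 # 0 = PSK used without a hash
    path=$(mktemp)                           # e.g. /tmp/tmp.tB3FZWFEov
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"                       # looser modes are rejected later in the run
)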
00:37:21.673 [2024-07-15 07:06:09.241540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826591 ] 00:37:21.673 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.932 [2024-07-15 07:06:09.304663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.932 [2024-07-15 07:06:09.389021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:22.191 07:06:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.191 [2024-07-15 07:06:09.638760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.191 null0 00:37:22.191 [2024-07-15 07:06:09.670815] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:22.191 [2024-07-15 07:06:09.671282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:22.191 [2024-07-15 07:06:09.678838] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.191 07:06:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.191 [2024-07-15 07:06:09.690874] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:22.191 request: 00:37:22.191 { 00:37:22.191 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.191 "secure_channel": false, 00:37:22.191 "listen_address": { 00:37:22.191 "trtype": "tcp", 00:37:22.191 "traddr": "127.0.0.1", 00:37:22.191 "trsvcid": "4420" 00:37:22.191 }, 00:37:22.191 "method": "nvmf_subsystem_add_listener", 00:37:22.191 "req_id": 1 00:37:22.191 } 00:37:22.191 Got JSON-RPC error response 00:37:22.191 response: 00:37:22.191 { 00:37:22.191 "code": -32602, 00:37:22.191 "message": "Invalid parameters" 00:37:22.191 } 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:22.191 07:06:09 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:22.191 07:06:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=826596 00:37:22.191 07:06:09 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:22.191 07:06:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 826596 /var/tmp/bperf.sock 00:37:22.191 07:06:09 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 826596 ']' 00:37:22.192 07:06:09 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.192 07:06:09 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:22.192 07:06:09 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.192 07:06:09 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:22.192 07:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.192 [2024-07-15 07:06:09.737681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:22.192 [2024-07-15 07:06:09.737740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826596 ] 00:37:22.192 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.192 [2024-07-15 07:06:09.798903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.450 [2024-07-15 07:06:09.889666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.450 07:06:09 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:22.450 07:06:09 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:22.450 07:06:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:22.450 07:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:22.708 07:06:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z8pF2qKgsQ 00:37:22.708 07:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z8pF2qKgsQ 00:37:22.966 07:06:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:22.966 07:06:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:22.966 07:06:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.966 07:06:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.966 07:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.223 07:06:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tB3FZWFEov == \/\t\m\p\/\t\m\p\.\t\B\3\F\Z\W\F\E\o\v ]] 00:37:23.224 07:06:10 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:23.224 07:06:10 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:23.224 07:06:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.224 07:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.224 07:06:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.481 07:06:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z8pF2qKgsQ == \/\t\m\p\/\t\m\p\.\z\8\p\F\2\q\K\g\s\Q ]] 00:37:23.481 07:06:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:23.481 07:06:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.481 07:06:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.481 07:06:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.481 07:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.481 07:06:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.739 07:06:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:23.739 07:06:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:23.739 07:06:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.739 07:06:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.739 07:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.739 07:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.739 07:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.997 07:06:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:23.997 07:06:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.997 07:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:24.255 [2024-07-15 07:06:11.762386] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:24.255 nvme0n1 00:37:24.255 07:06:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:24.255 07:06:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:24.255 07:06:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.255 07:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.255 07:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.255 07:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.512 07:06:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:24.512 07:06:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:24.512 07:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:24.512 07:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.512 07:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.512 
07:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.512 07:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:24.770 07:06:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:24.770 07:06:12 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:25.027 Running I/O for 1 seconds... 00:37:25.961 00:37:25.961 Latency(us) 00:37:25.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.961 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:25.961 nvme0n1 : 1.02 5269.47 20.58 0.00 0.00 24050.56 4199.16 28932.93 00:37:25.961 =================================================================================================================== 00:37:25.961 Total : 5269.47 20.58 0.00 0.00 24050.56 4199.16 28932.93 00:37:25.961 0 00:37:25.961 07:06:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:25.961 07:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:26.218 07:06:13 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:26.218 07:06:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.218 07:06:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.218 07:06:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.218 07:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.218 07:06:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.474 07:06:13 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:26.474 07:06:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:26.474 07:06:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.474 07:06:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.474 07:06:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.474 07:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.474 07:06:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.731 07:06:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:26.731 07:06:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:26.731 07:06:14 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.731 07:06:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:26.731 07:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:26.989 [2024-07-15 07:06:14.465210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:26.989 [2024-07-15 07:06:14.465531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63d730 (107): Transport endpoint is not connected 00:37:26.989 [2024-07-15 07:06:14.466522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63d730 (9): Bad file descriptor 00:37:26.989 [2024-07-15 07:06:14.467520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:26.989 [2024-07-15 07:06:14.467543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:26.989 [2024-07-15 07:06:14.467558] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:26.989 request: 00:37:26.989 { 00:37:26.989 "name": "nvme0", 00:37:26.989 "trtype": "tcp", 00:37:26.989 "traddr": "127.0.0.1", 00:37:26.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.989 "adrfam": "ipv4", 00:37:26.989 "trsvcid": "4420", 00:37:26.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.989 "psk": "key1", 00:37:26.989 "method": "bdev_nvme_attach_controller", 00:37:26.989 "req_id": 1 00:37:26.989 } 00:37:26.989 Got JSON-RPC error response 00:37:26.989 response: 00:37:26.989 { 00:37:26.989 "code": -5, 00:37:26.989 "message": "Input/output error" 00:37:26.989 } 00:37:26.989 07:06:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:26.989 07:06:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:26.989 07:06:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:26.989 07:06:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:26.989 07:06:14 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:26.989 07:06:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.989 07:06:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.989 07:06:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.989 07:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.989 07:06:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:27.247 07:06:14 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:27.247 07:06:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:27.247 07:06:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:27.247 07:06:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.247 07:06:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.247 07:06:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.247 07:06:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:27.505 07:06:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:27.505 07:06:14 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:27.505 07:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:27.762 07:06:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:27.762 07:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:28.020 07:06:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:28.020 07:06:15 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:28.020 07:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.277 07:06:15 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:28.277 07:06:15 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tB3FZWFEov 00:37:28.277 07:06:15 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.277 07:06:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.277 07:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.535 [2024-07-15 07:06:15.958438] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tB3FZWFEov': 0100660 00:37:28.535 [2024-07-15 07:06:15.958473] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:28.535 request: 00:37:28.535 { 00:37:28.535 "name": "key0", 00:37:28.535 "path": "/tmp/tmp.tB3FZWFEov", 00:37:28.535 "method": "keyring_file_add_key", 00:37:28.535 "req_id": 1 00:37:28.535 } 00:37:28.535 Got JSON-RPC error response 00:37:28.535 response: 00:37:28.535 { 00:37:28.535 "code": -1, 00:37:28.535 "message": "Operation not permitted" 00:37:28.535 } 00:37:28.535 07:06:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:28.535 07:06:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.535 07:06:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.535 07:06:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.535 07:06:15 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tB3FZWFEov 00:37:28.535 07:06:15 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.535 07:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tB3FZWFEov 00:37:28.793 07:06:16 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tB3FZWFEov 00:37:28.793 07:06:16 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:28.793 07:06:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.793 07:06:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.793 07:06:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.793 07:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.793 07:06:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.051 07:06:16 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:29.051 07:06:16 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:29.051 07:06:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.051 07:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.309 [2024-07-15 07:06:16.696443] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tB3FZWFEov': No such file or directory 00:37:29.309 [2024-07-15 07:06:16.696479] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:29.309 [2024-07-15 07:06:16.696512] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:29.309 [2024-07-15 07:06:16.696522] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:29.309 [2024-07-15 07:06:16.696533] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:29.309 request: 00:37:29.309 { 00:37:29.309 "name": "nvme0", 00:37:29.309 "trtype": "tcp", 00:37:29.309 "traddr": "127.0.0.1", 00:37:29.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.309 "adrfam": "ipv4", 00:37:29.309 "trsvcid": "4420", 00:37:29.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.309 "psk": "key0", 00:37:29.309 "method": "bdev_nvme_attach_controller", 
00:37:29.309 "req_id": 1 00:37:29.309 } 00:37:29.309 Got JSON-RPC error response 00:37:29.309 response: 00:37:29.309 { 00:37:29.309 "code": -19, 00:37:29.309 "message": "No such device" 00:37:29.309 } 00:37:29.309 07:06:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:29.309 07:06:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:29.309 07:06:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:29.309 07:06:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:29.309 07:06:16 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:29.309 07:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:29.567 07:06:16 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IH52IParHk 00:37:29.567 07:06:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:29.567 07:06:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:29.567 07:06:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IH52IParHk 00:37:29.567 07:06:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IH52IParHk 00:37:29.567 07:06:17 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IH52IParHk 00:37:29.567 07:06:17 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IH52IParHk 00:37:29.567 07:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IH52IParHk 00:37:29.824 07:06:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.824 07:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.082 nvme0n1 00:37:30.082 07:06:17 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:30.082 07:06:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:30.082 07:06:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:30.082 07:06:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.082 07:06:17 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.082 07:06:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.340 07:06:17 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:30.340 07:06:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:30.340 07:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:30.598 07:06:18 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:30.598 07:06:18 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:30.598 07:06:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.598 07:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.598 07:06:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.856 07:06:18 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:30.856 07:06:18 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:30.856 07:06:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:30.856 07:06:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:30.856 07:06:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.856 07:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.856 07:06:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.114 07:06:18 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:31.114 07:06:18 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:31.114 07:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:31.372 07:06:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:31.372 07:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.372 07:06:18 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:31.630 07:06:19 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:31.630 07:06:19 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IH52IParHk 00:37:31.630 07:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IH52IParHk 00:37:31.889 07:06:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z8pF2qKgsQ 00:37:31.889 07:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z8pF2qKgsQ 00:37:32.158 07:06:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.158 07:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.467 nvme0n1 00:37:32.467 07:06:19 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:32.467 07:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:32.726 07:06:20 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:32.726 "subsystems": [ 00:37:32.726 { 00:37:32.726 "subsystem": "keyring", 00:37:32.726 "config": [ 00:37:32.726 { 00:37:32.726 "method": "keyring_file_add_key", 00:37:32.726 "params": { 00:37:32.726 "name": "key0", 00:37:32.726 "path": "/tmp/tmp.IH52IParHk" 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "keyring_file_add_key", 00:37:32.726 "params": { 00:37:32.726 "name": "key1", 00:37:32.726 "path": "/tmp/tmp.z8pF2qKgsQ" 00:37:32.726 } 00:37:32.726 } 00:37:32.726 ] 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "subsystem": "iobuf", 00:37:32.726 "config": [ 00:37:32.726 { 00:37:32.726 "method": "iobuf_set_options", 00:37:32.726 "params": { 00:37:32.726 "small_pool_count": 8192, 00:37:32.726 "large_pool_count": 1024, 00:37:32.726 "small_bufsize": 8192, 00:37:32.726 "large_bufsize": 135168 00:37:32.726 } 00:37:32.726 } 00:37:32.726 ] 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "subsystem": "sock", 00:37:32.726 "config": [ 00:37:32.726 { 00:37:32.726 "method": "sock_set_default_impl", 00:37:32.726 "params": { 00:37:32.726 "impl_name": "posix" 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "sock_impl_set_options", 00:37:32.726 "params": { 00:37:32.726 "impl_name": "ssl", 00:37:32.726 "recv_buf_size": 4096, 00:37:32.726 "send_buf_size": 4096, 00:37:32.726 "enable_recv_pipe": true, 00:37:32.726 "enable_quickack": false, 00:37:32.726 "enable_placement_id": 0, 00:37:32.726 "enable_zerocopy_send_server": true, 00:37:32.726 "enable_zerocopy_send_client": false, 00:37:32.726 "zerocopy_threshold": 0, 00:37:32.726 "tls_version": 0, 00:37:32.726 "enable_ktls": false 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "sock_impl_set_options", 00:37:32.726 "params": { 00:37:32.726 "impl_name": "posix", 00:37:32.726 "recv_buf_size": 2097152, 00:37:32.726 "send_buf_size": 2097152, 00:37:32.726 "enable_recv_pipe": true, 00:37:32.726 "enable_quickack": false, 00:37:32.726 "enable_placement_id": 0, 00:37:32.726 "enable_zerocopy_send_server": true, 00:37:32.726 "enable_zerocopy_send_client": false, 00:37:32.726 "zerocopy_threshold": 0, 00:37:32.726 "tls_version": 0, 00:37:32.726 "enable_ktls": false 00:37:32.726 } 00:37:32.726 } 00:37:32.726 ] 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "subsystem": "vmd", 00:37:32.726 "config": [] 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "subsystem": "accel", 00:37:32.726 "config": [ 00:37:32.726 { 00:37:32.726 "method": "accel_set_options", 00:37:32.726 "params": { 00:37:32.726 "small_cache_size": 128, 00:37:32.726 "large_cache_size": 16, 00:37:32.726 "task_count": 2048, 00:37:32.726 "sequence_count": 2048, 00:37:32.726 "buf_count": 2048 00:37:32.726 } 00:37:32.726 } 00:37:32.726 ] 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "subsystem": "bdev", 00:37:32.726 "config": [ 00:37:32.726 { 00:37:32.726 "method": "bdev_set_options", 00:37:32.726 "params": { 00:37:32.726 "bdev_io_pool_size": 65535, 00:37:32.726 "bdev_io_cache_size": 256, 00:37:32.726 "bdev_auto_examine": true, 00:37:32.726 "iobuf_small_cache_size": 128, 
00:37:32.726 "iobuf_large_cache_size": 16 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "bdev_raid_set_options", 00:37:32.726 "params": { 00:37:32.726 "process_window_size_kb": 1024 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "bdev_iscsi_set_options", 00:37:32.726 "params": { 00:37:32.726 "timeout_sec": 30 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "bdev_nvme_set_options", 00:37:32.726 "params": { 00:37:32.726 "action_on_timeout": "none", 00:37:32.726 "timeout_us": 0, 00:37:32.726 "timeout_admin_us": 0, 00:37:32.726 "keep_alive_timeout_ms": 10000, 00:37:32.726 "arbitration_burst": 0, 00:37:32.726 "low_priority_weight": 0, 00:37:32.726 "medium_priority_weight": 0, 00:37:32.726 "high_priority_weight": 0, 00:37:32.726 "nvme_adminq_poll_period_us": 10000, 00:37:32.726 "nvme_ioq_poll_period_us": 0, 00:37:32.726 "io_queue_requests": 512, 00:37:32.726 "delay_cmd_submit": true, 00:37:32.726 "transport_retry_count": 4, 00:37:32.726 "bdev_retry_count": 3, 00:37:32.726 "transport_ack_timeout": 0, 00:37:32.726 "ctrlr_loss_timeout_sec": 0, 00:37:32.726 "reconnect_delay_sec": 0, 00:37:32.726 "fast_io_fail_timeout_sec": 0, 00:37:32.726 "disable_auto_failback": false, 00:37:32.726 "generate_uuids": false, 00:37:32.726 "transport_tos": 0, 00:37:32.726 "nvme_error_stat": false, 00:37:32.726 "rdma_srq_size": 0, 00:37:32.726 "io_path_stat": false, 00:37:32.726 "allow_accel_sequence": false, 00:37:32.726 "rdma_max_cq_size": 0, 00:37:32.726 "rdma_cm_event_timeout_ms": 0, 00:37:32.726 "dhchap_digests": [ 00:37:32.726 "sha256", 00:37:32.726 "sha384", 00:37:32.726 "sha512" 00:37:32.726 ], 00:37:32.726 "dhchap_dhgroups": [ 00:37:32.726 "null", 00:37:32.726 "ffdhe2048", 00:37:32.726 "ffdhe3072", 00:37:32.726 "ffdhe4096", 00:37:32.726 "ffdhe6144", 00:37:32.726 "ffdhe8192" 00:37:32.726 ] 00:37:32.726 } 00:37:32.726 }, 00:37:32.726 { 00:37:32.726 "method": "bdev_nvme_attach_controller", 00:37:32.726 "params": { 00:37:32.726 "name": "nvme0", 00:37:32.726 "trtype": "TCP", 00:37:32.727 "adrfam": "IPv4", 00:37:32.727 "traddr": "127.0.0.1", 00:37:32.727 "trsvcid": "4420", 00:37:32.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.727 "prchk_reftag": false, 00:37:32.727 "prchk_guard": false, 00:37:32.727 "ctrlr_loss_timeout_sec": 0, 00:37:32.727 "reconnect_delay_sec": 0, 00:37:32.727 "fast_io_fail_timeout_sec": 0, 00:37:32.727 "psk": "key0", 00:37:32.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.727 "hdgst": false, 00:37:32.727 "ddgst": false 00:37:32.727 } 00:37:32.727 }, 00:37:32.727 { 00:37:32.727 "method": "bdev_nvme_set_hotplug", 00:37:32.727 "params": { 00:37:32.727 "period_us": 100000, 00:37:32.727 "enable": false 00:37:32.727 } 00:37:32.727 }, 00:37:32.727 { 00:37:32.727 "method": "bdev_wait_for_examine" 00:37:32.727 } 00:37:32.727 ] 00:37:32.727 }, 00:37:32.727 { 00:37:32.727 "subsystem": "nbd", 00:37:32.727 "config": [] 00:37:32.727 } 00:37:32.727 ] 00:37:32.727 }' 00:37:32.727 07:06:20 keyring_file -- keyring/file.sh@114 -- # killprocess 826596 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 826596 ']' 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@950 -- # kill -0 826596 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 826596 00:37:32.727 07:06:20 keyring_file -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 826596' 00:37:32.727 killing process with pid 826596 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@965 -- # kill 826596 00:37:32.727 Received shutdown signal, test time was about 1.000000 seconds 00:37:32.727 00:37:32.727 Latency(us) 00:37:32.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.727 =================================================================================================================== 00:37:32.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.727 07:06:20 keyring_file -- common/autotest_common.sh@970 -- # wait 826596 00:37:32.985 07:06:20 keyring_file -- keyring/file.sh@117 -- # bperfpid=828052 00:37:32.985 07:06:20 keyring_file -- keyring/file.sh@119 -- # waitforlisten 828052 /var/tmp/bperf.sock 00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 828052 ']' 00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:32.985 07:06:20 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:32.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
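(Annotation: the restarted bdevperf above is handed -c /dev/fd/63, i.e. it boots from the JSON captured out of the first instance with save_config rather than being configured over RPC. The save/replay pattern reduces to the sketch below, with paths as in this workspace; bash process substitution is what supplies /dev/fd/63:

    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
)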
00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:32.985 07:06:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:32.985 07:06:20 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:32.985 "subsystems": [ 00:37:32.985 { 00:37:32.985 "subsystem": "keyring", 00:37:32.985 "config": [ 00:37:32.985 { 00:37:32.985 "method": "keyring_file_add_key", 00:37:32.985 "params": { 00:37:32.985 "name": "key0", 00:37:32.985 "path": "/tmp/tmp.IH52IParHk" 00:37:32.985 } 00:37:32.985 }, 00:37:32.985 { 00:37:32.985 "method": "keyring_file_add_key", 00:37:32.985 "params": { 00:37:32.985 "name": "key1", 00:37:32.985 "path": "/tmp/tmp.z8pF2qKgsQ" 00:37:32.985 } 00:37:32.985 } 00:37:32.985 ] 00:37:32.985 }, 00:37:32.985 { 00:37:32.985 "subsystem": "iobuf", 00:37:32.985 "config": [ 00:37:32.985 { 00:37:32.985 "method": "iobuf_set_options", 00:37:32.985 "params": { 00:37:32.985 "small_pool_count": 8192, 00:37:32.986 "large_pool_count": 1024, 00:37:32.986 "small_bufsize": 8192, 00:37:32.986 "large_bufsize": 135168 00:37:32.986 } 00:37:32.986 } 00:37:32.986 ] 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "subsystem": "sock", 00:37:32.986 "config": [ 00:37:32.986 { 00:37:32.986 "method": "sock_set_default_impl", 00:37:32.986 "params": { 00:37:32.986 "impl_name": "posix" 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "sock_impl_set_options", 00:37:32.986 "params": { 00:37:32.986 "impl_name": "ssl", 00:37:32.986 "recv_buf_size": 4096, 00:37:32.986 "send_buf_size": 4096, 00:37:32.986 "enable_recv_pipe": true, 00:37:32.986 "enable_quickack": false, 00:37:32.986 "enable_placement_id": 0, 00:37:32.986 "enable_zerocopy_send_server": true, 00:37:32.986 "enable_zerocopy_send_client": false, 00:37:32.986 "zerocopy_threshold": 0, 00:37:32.986 "tls_version": 0, 00:37:32.986 "enable_ktls": false 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "sock_impl_set_options", 00:37:32.986 "params": { 00:37:32.986 "impl_name": "posix", 00:37:32.986 "recv_buf_size": 2097152, 00:37:32.986 "send_buf_size": 2097152, 00:37:32.986 "enable_recv_pipe": true, 00:37:32.986 "enable_quickack": false, 00:37:32.986 "enable_placement_id": 0, 00:37:32.986 "enable_zerocopy_send_server": true, 00:37:32.986 "enable_zerocopy_send_client": false, 00:37:32.986 "zerocopy_threshold": 0, 00:37:32.986 "tls_version": 0, 00:37:32.986 "enable_ktls": false 00:37:32.986 } 00:37:32.986 } 00:37:32.986 ] 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "subsystem": "vmd", 00:37:32.986 "config": [] 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "subsystem": "accel", 00:37:32.986 "config": [ 00:37:32.986 { 00:37:32.986 "method": "accel_set_options", 00:37:32.986 "params": { 00:37:32.986 "small_cache_size": 128, 00:37:32.986 "large_cache_size": 16, 00:37:32.986 "task_count": 2048, 00:37:32.986 "sequence_count": 2048, 00:37:32.986 "buf_count": 2048 00:37:32.986 } 00:37:32.986 } 00:37:32.986 ] 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "subsystem": "bdev", 00:37:32.986 "config": [ 00:37:32.986 { 00:37:32.986 "method": "bdev_set_options", 00:37:32.986 "params": { 00:37:32.986 "bdev_io_pool_size": 65535, 00:37:32.986 "bdev_io_cache_size": 256, 00:37:32.986 "bdev_auto_examine": true, 00:37:32.986 "iobuf_small_cache_size": 128, 00:37:32.986 "iobuf_large_cache_size": 16 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "bdev_raid_set_options", 00:37:32.986 "params": { 00:37:32.986 "process_window_size_kb": 1024 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 
"method": "bdev_iscsi_set_options", 00:37:32.986 "params": { 00:37:32.986 "timeout_sec": 30 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "bdev_nvme_set_options", 00:37:32.986 "params": { 00:37:32.986 "action_on_timeout": "none", 00:37:32.986 "timeout_us": 0, 00:37:32.986 "timeout_admin_us": 0, 00:37:32.986 "keep_alive_timeout_ms": 10000, 00:37:32.986 "arbitration_burst": 0, 00:37:32.986 "low_priority_weight": 0, 00:37:32.986 "medium_priority_weight": 0, 00:37:32.986 "high_priority_weight": 0, 00:37:32.986 "nvme_adminq_poll_period_us": 10000, 00:37:32.986 "nvme_ioq_poll_period_us": 0, 00:37:32.986 "io_queue_requests": 512, 00:37:32.986 "delay_cmd_submit": true, 00:37:32.986 "transport_retry_count": 4, 00:37:32.986 "bdev_retry_count": 3, 00:37:32.986 "transport_ack_timeout": 0, 00:37:32.986 "ctrlr_loss_timeout_sec": 0, 00:37:32.986 "reconnect_delay_sec": 0, 00:37:32.986 "fast_io_fail_timeout_sec": 0, 00:37:32.986 "disable_auto_failback": false, 00:37:32.986 "generate_uuids": false, 00:37:32.986 "transport_tos": 0, 00:37:32.986 "nvme_error_stat": false, 00:37:32.986 "rdma_srq_size": 0, 00:37:32.986 "io_path_stat": false, 00:37:32.986 "allow_accel_sequence": false, 00:37:32.986 "rdma_max_cq_size": 0, 00:37:32.986 "rdma_cm_event_timeout_ms": 0, 00:37:32.986 "dhchap_digests": [ 00:37:32.986 "sha256", 00:37:32.986 "sha384", 00:37:32.986 "sha512" 00:37:32.986 ], 00:37:32.986 "dhchap_dhgroups": [ 00:37:32.986 "null", 00:37:32.986 "ffdhe2048", 00:37:32.986 "ffdhe3072", 00:37:32.986 "ffdhe4096", 00:37:32.986 "ffdhe6144", 00:37:32.986 "ffdhe8192" 00:37:32.986 ] 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "bdev_nvme_attach_controller", 00:37:32.986 "params": { 00:37:32.986 "name": "nvme0", 00:37:32.986 "trtype": "TCP", 00:37:32.986 "adrfam": "IPv4", 00:37:32.986 "traddr": "127.0.0.1", 00:37:32.986 "trsvcid": "4420", 00:37:32.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.986 "prchk_reftag": false, 00:37:32.986 "prchk_guard": false, 00:37:32.986 "ctrlr_loss_timeout_sec": 0, 00:37:32.986 "reconnect_delay_sec": 0, 00:37:32.986 "fast_io_fail_timeout_sec": 0, 00:37:32.986 "psk": "key0", 00:37:32.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.986 "hdgst": false, 00:37:32.986 "ddgst": false 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "bdev_nvme_set_hotplug", 00:37:32.986 "params": { 00:37:32.986 "period_us": 100000, 00:37:32.986 "enable": false 00:37:32.986 } 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "method": "bdev_wait_for_examine" 00:37:32.986 } 00:37:32.986 ] 00:37:32.986 }, 00:37:32.986 { 00:37:32.986 "subsystem": "nbd", 00:37:32.986 "config": [] 00:37:32.986 } 00:37:32.986 ] 00:37:32.986 }' 00:37:32.986 [2024-07-15 07:06:20.502479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:32.986 [2024-07-15 07:06:20.502572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828052 ] 00:37:32.986 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.986 [2024-07-15 07:06:20.561906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.245 [2024-07-15 07:06:20.648925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.245 [2024-07-15 07:06:20.829644] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:34.179 07:06:21 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:34.179 07:06:21 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:34.179 07:06:21 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.179 07:06:21 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:34.179 07:06:21 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:34.179 07:06:21 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.179 07:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.437 07:06:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:34.437 07:06:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:34.437 07:06:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.437 07:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.437 07:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.437 07:06:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.437 07:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.695 07:06:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:34.695 07:06:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:34.695 07:06:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:34.695 07:06:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:34.953 07:06:22 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:34.953 07:06:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:34.953 07:06:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IH52IParHk /tmp/tmp.z8pF2qKgsQ 00:37:34.953 07:06:22 keyring_file -- keyring/file.sh@20 -- # killprocess 828052 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 828052 ']' 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@950 -- # kill -0 828052 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828052 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828052' 00:37:34.953 killing process with pid 828052 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@965 -- # kill 828052 00:37:34.953 Received shutdown signal, test time was about 1.000000 seconds 00:37:34.953 00:37:34.953 Latency(us) 00:37:34.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.953 =================================================================================================================== 00:37:34.953 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:34.953 07:06:22 keyring_file -- common/autotest_common.sh@970 -- # wait 828052 00:37:35.211 07:06:22 keyring_file -- keyring/file.sh@21 -- # killprocess 826591 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 826591 ']' 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@950 -- # kill -0 826591 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 826591 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 826591' 00:37:35.211 killing process with pid 826591 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@965 -- # kill 826591 00:37:35.211 [2024-07-15 07:06:22.706614] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:35.211 07:06:22 keyring_file -- common/autotest_common.sh@970 -- # wait 826591 00:37:35.469 00:37:35.469 real 0m14.031s 00:37:35.469 user 0m34.899s 00:37:35.469 sys 0m3.292s 00:37:35.469 07:06:23 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:35.469 07:06:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:35.469 ************************************ 00:37:35.469 END TEST keyring_file 00:37:35.469 ************************************ 00:37:35.744 07:06:23 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:35.744 07:06:23 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:35.744 07:06:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:35.744 07:06:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:35.744 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:37:35.744 ************************************ 00:37:35.744 START TEST keyring_linux 00:37:35.744 ************************************ 00:37:35.744 07:06:23 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:35.744 * Looking for test storage... 
00:37:35.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:35.744 07:06:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:35.744 07:06:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.744 07:06:23 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.744 07:06:23 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.744 07:06:23 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.744 07:06:23 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.744 07:06:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.745 07:06:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.745 07:06:23 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.745 07:06:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:35.745 07:06:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:35.745 07:06:23 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:35.745 07:06:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:35.745 07:06:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:35.746 07:06:23 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:35.746 /tmp/:spdk-test:key0 00:37:35.746 07:06:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:35.746 07:06:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:35.746 07:06:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:35.746 /tmp/:spdk-test:key1 00:37:35.747 07:06:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=828416 00:37:35.747 07:06:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:35.747 07:06:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 828416 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 828416 ']' 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:35.747 07:06:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:35.747 [2024-07-15 07:06:23.304815] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
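[editor's note] The prep_key steps above turn each raw hex key into the NVMe TLS interchange form NVMeTLSkey-1:00:<base64>: before writing it to /tmp/:spdk-test:key0, using an inline "python -" helper whose body the trace does not show. A stand-alone reconstruction of that transformation follows; the 4-byte little-endian CRC32 trailer is my reading of the interchange layout (digest field 00), so treat this as a sketch rather than the canonical helper:

key=00112233445566778899aabbccddeeff
# base64(ASCII key bytes + CRC32 trailer), wrapped in the interchange prefix/suffix
python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"

If that assumption holds, feeding key0 through this should reproduce the MDAxMTIy...JEiQ string that keyctl registers further down.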
00:37:35.747 [2024-07-15 07:06:23.304919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828416 ] 00:37:35.747 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.006 [2024-07-15 07:06:23.366470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.006 [2024-07-15 07:06:23.453823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:36.262 [2024-07-15 07:06:23.713787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.262 null0 00:37:36.262 [2024-07-15 07:06:23.745827] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:36.262 [2024-07-15 07:06:23.746323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:36.262 606965715 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:36.262 267083865 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=828508 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:36.262 07:06:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 828508 /var/tmp/bperf.sock 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 828508 ']' 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:36.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:36.262 07:06:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:36.262 [2024-07-15 07:06:23.814474] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
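[editor's note] The two keyctl add calls above stage the formatted PSKs in the kernel session keyring (@s), which is what lets bdevperf reference them by name (--psk :spdk-test:key0) instead of a file path; keyctl answers each add with the key's serial number (606965715 and 267083865 here). Assuming keyutils is installed, the provision/verify/cleanup cycle the test drives can be reproduced by hand:

# Stage the formatted PSK in the session keyring; keyctl prints the new serial
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
# Resolve the name back to a serial and read the payload, as check_keys does later
keyctl search @s user :spdk-test:key0
keyctl print "$sn"
# Drop the key again once finished (mirrors the cleanup trap's unlink_key)
keyctl unlink "$sn"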
00:37:36.262 [2024-07-15 07:06:23.814555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828508 ] 00:37:36.262 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.519 [2024-07-15 07:06:23.882755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.519 [2024-07-15 07:06:23.974506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.519 07:06:24 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:36.519 07:06:24 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:36.519 07:06:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:36.519 07:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:36.776 07:06:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:36.776 07:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:37.033 07:06:24 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:37.033 07:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:37.291 [2024-07-15 07:06:24.790732] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:37.291 nvme0n1 00:37:37.291 07:06:24 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:37.291 07:06:24 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:37.291 07:06:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:37.291 07:06:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:37.291 07:06:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:37.291 07:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.549 07:06:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:37.549 07:06:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:37.549 07:06:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:37.549 07:06:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:37.549 07:06:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.549 07:06:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.549 07:06:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@25 -- # sn=606965715 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@26 -- # [[ 606965715 == \6\0\6\9\6\5\7\1\5 ]] 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 606965715 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:37.806 07:06:25 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:38.064 Running I/O for 1 seconds... 00:37:38.999 00:37:38.999 Latency(us) 00:37:38.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.999 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:38.999 nvme0n1 : 1.01 5371.19 20.98 0.00 0.00 23664.28 9611.95 33787.45 00:37:38.999 =================================================================================================================== 00:37:38.999 Total : 5371.19 20.98 0.00 0.00 23664.28 9611.95 33787.45 00:37:38.999 0 00:37:38.999 07:06:26 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:38.999 07:06:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:39.257 07:06:26 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:39.257 07:06:26 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:39.257 07:06:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:39.257 07:06:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:39.257 07:06:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.257 07:06:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:39.515 07:06:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:39.515 07:06:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:39.515 07:06:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:39.515 07:06:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.515 07:06:27 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.515 07:06:27 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.774 [2024-07-15 07:06:27.260733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:39.774 [2024-07-15 07:06:27.261045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f8ea0 (107): Transport endpoint is not connected 00:37:39.774 [2024-07-15 07:06:27.262037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f8ea0 (9): Bad file descriptor 00:37:39.774 [2024-07-15 07:06:27.263036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.774 [2024-07-15 07:06:27.263057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:39.774 [2024-07-15 07:06:27.263071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.774 request: 00:37:39.774 { 00:37:39.774 "name": "nvme0", 00:37:39.774 "trtype": "tcp", 00:37:39.774 "traddr": "127.0.0.1", 00:37:39.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.774 "adrfam": "ipv4", 00:37:39.774 "trsvcid": "4420", 00:37:39.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.774 "psk": ":spdk-test:key1", 00:37:39.774 "method": "bdev_nvme_attach_controller", 00:37:39.774 "req_id": 1 00:37:39.774 } 00:37:39.774 Got JSON-RPC error response 00:37:39.774 response: 00:37:39.774 { 00:37:39.774 "code": -5, 00:37:39.774 "message": "Input/output error" 00:37:39.774 } 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@33 -- # sn=606965715 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 606965715 00:37:39.774 1 links removed 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@33 -- # sn=267083865 00:37:39.774 07:06:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 267083865 00:37:39.774 1 links removed 00:37:39.774 07:06:27 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 828508 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 828508 ']' 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 828508 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828508 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828508' 00:37:39.774 killing process with pid 828508 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@965 -- # kill 828508 00:37:39.774 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.774 00:37:39.774 Latency(us) 00:37:39.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.774 =================================================================================================================== 00:37:39.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:39.774 07:06:27 keyring_linux -- common/autotest_common.sh@970 -- # wait 828508 00:37:40.032 07:06:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 828416 00:37:40.032 07:06:27 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 828416 ']' 00:37:40.032 07:06:27 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 828416 00:37:40.032 07:06:27 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:40.032 07:06:27 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:40.032 07:06:27 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 828416 00:37:40.033 07:06:27 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:40.033 07:06:27 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:40.033 07:06:27 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 828416' 00:37:40.033 killing process with pid 828416 00:37:40.033 07:06:27 keyring_linux -- common/autotest_common.sh@965 -- # kill 828416 00:37:40.033 07:06:27 keyring_linux -- common/autotest_common.sh@970 -- # wait 828416 00:37:40.599 00:37:40.599 real 0m4.858s 00:37:40.599 user 0m9.119s 00:37:40.599 sys 0m1.544s 00:37:40.599 07:06:27 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:40.599 07:06:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:40.599 ************************************ 00:37:40.599 END TEST keyring_linux 00:37:40.599 ************************************ 00:37:40.599 07:06:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:40.599 07:06:28 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:40.599 07:06:28 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:40.599 07:06:28 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:40.599 07:06:28 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:40.599 07:06:28 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:40.599 07:06:28 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:40.599 07:06:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:40.599 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:37:40.599 07:06:28 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:40.599 07:06:28 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:40.599 07:06:28 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:40.599 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:37:42.501 INFO: APP EXITING 00:37:42.501 INFO: killing all VMs 00:37:42.501 INFO: killing vhost app 00:37:42.501 INFO: EXIT DONE 00:37:43.433 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:43.433 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:43.433 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:43.433 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:43.433 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:43.433 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:43.433 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:43.433 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:43.433 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:43.433 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:43.433 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:43.433 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:43.433 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:43.433 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:43.690 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:43.690 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:43.690 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:44.623 Cleaning 00:37:44.623 Removing: /var/run/dpdk/spdk0/config 00:37:44.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:44.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:44.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:44.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:44.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:44.880 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:44.880 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:44.880 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:44.880 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:44.880 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:44.880 Removing: /var/run/dpdk/spdk1/config 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:44.880 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:44.880 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:44.880 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:44.880 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:44.880 Removing: /var/run/dpdk/spdk2/config 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:44.880 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:44.880 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:44.880 Removing: /var/run/dpdk/spdk3/config 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:44.880 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:44.880 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:44.880 Removing: /var/run/dpdk/spdk4/config 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:44.880 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:44.880 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:44.880 Removing: /dev/shm/bdev_svc_trace.1 00:37:44.881 Removing: /dev/shm/nvmf_trace.0 00:37:44.881 Removing: /dev/shm/spdk_tgt_trace.pid508383 00:37:44.881 Removing: /var/run/dpdk/spdk0 00:37:44.881 Removing: /var/run/dpdk/spdk1 00:37:44.881 Removing: /var/run/dpdk/spdk2 00:37:44.881 Removing: /var/run/dpdk/spdk3 00:37:44.881 Removing: /var/run/dpdk/spdk4 00:37:44.881 Removing: /var/run/dpdk/spdk_pid506807 00:37:44.881 Removing: /var/run/dpdk/spdk_pid507537 00:37:44.881 Removing: /var/run/dpdk/spdk_pid508383 00:37:44.881 Removing: /var/run/dpdk/spdk_pid508786 00:37:44.881 Removing: /var/run/dpdk/spdk_pid509473 00:37:44.881 Removing: /var/run/dpdk/spdk_pid509621 00:37:44.881 Removing: /var/run/dpdk/spdk_pid510336 00:37:44.881 Removing: /var/run/dpdk/spdk_pid510350 00:37:44.881 Removing: /var/run/dpdk/spdk_pid510586 00:37:44.881 Removing: /var/run/dpdk/spdk_pid511779 00:37:44.881 Removing: /var/run/dpdk/spdk_pid512819 00:37:44.881 Removing: /var/run/dpdk/spdk_pid513012 00:37:44.881 Removing: /var/run/dpdk/spdk_pid513216 00:37:44.881 Removing: /var/run/dpdk/spdk_pid513519 00:37:44.881 Removing: /var/run/dpdk/spdk_pid513707 00:37:44.881 Removing: /var/run/dpdk/spdk_pid513870 00:37:44.881 
Removing: /var/run/dpdk/spdk_pid514022 00:37:44.881 Removing: /var/run/dpdk/spdk_pid514202 00:37:44.881 Removing: /var/run/dpdk/spdk_pid514787 00:37:44.881 Removing: /var/run/dpdk/spdk_pid517135 00:37:44.881 Removing: /var/run/dpdk/spdk_pid517331 00:37:44.881 Removing: /var/run/dpdk/spdk_pid517584 00:37:44.881 Removing: /var/run/dpdk/spdk_pid517598 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518012 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518145 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518446 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518574 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518742 00:37:44.881 Removing: /var/run/dpdk/spdk_pid518969 00:37:44.881 Removing: /var/run/dpdk/spdk_pid519410 00:37:44.881 Removing: /var/run/dpdk/spdk_pid519548 00:37:44.881 Removing: /var/run/dpdk/spdk_pid519920 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520078 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520290 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520443 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520586 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520655 00:37:44.881 Removing: /var/run/dpdk/spdk_pid520877 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521080 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521240 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521398 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521674 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521828 00:37:44.881 Removing: /var/run/dpdk/spdk_pid521986 00:37:44.881 Removing: /var/run/dpdk/spdk_pid522144 00:37:44.881 Removing: /var/run/dpdk/spdk_pid522414 00:37:44.881 Removing: /var/run/dpdk/spdk_pid522574 00:37:44.881 Removing: /var/run/dpdk/spdk_pid522727 00:37:44.881 Removing: /var/run/dpdk/spdk_pid522908 00:37:44.881 Removing: /var/run/dpdk/spdk_pid523162 00:37:44.881 Removing: /var/run/dpdk/spdk_pid523321 00:37:44.881 Removing: /var/run/dpdk/spdk_pid523476 00:37:44.881 Removing: /var/run/dpdk/spdk_pid523741 00:37:44.881 Removing: /var/run/dpdk/spdk_pid523910 00:37:44.881 Removing: /var/run/dpdk/spdk_pid524064 00:37:44.881 Removing: /var/run/dpdk/spdk_pid524228 00:37:44.881 Removing: /var/run/dpdk/spdk_pid524499 00:37:44.881 Removing: /var/run/dpdk/spdk_pid524571 00:37:44.881 Removing: /var/run/dpdk/spdk_pid524775 00:37:44.881 Removing: /var/run/dpdk/spdk_pid526821 00:37:44.881 Removing: /var/run/dpdk/spdk_pid580644 00:37:44.881 Removing: /var/run/dpdk/spdk_pid583141 00:37:44.881 Removing: /var/run/dpdk/spdk_pid590103 00:37:44.881 Removing: /var/run/dpdk/spdk_pid593386 00:37:44.881 Removing: /var/run/dpdk/spdk_pid595735 00:37:44.881 Removing: /var/run/dpdk/spdk_pid596133 00:37:44.881 Removing: /var/run/dpdk/spdk_pid603388 00:37:44.881 Removing: /var/run/dpdk/spdk_pid603392 00:37:44.881 Removing: /var/run/dpdk/spdk_pid604043 00:37:44.881 Removing: /var/run/dpdk/spdk_pid604579 00:37:44.881 Removing: /var/run/dpdk/spdk_pid605242 00:37:44.881 Removing: /var/run/dpdk/spdk_pid605639 00:37:44.881 Removing: /var/run/dpdk/spdk_pid605641 00:37:44.881 Removing: /var/run/dpdk/spdk_pid605898 00:37:44.881 Removing: /var/run/dpdk/spdk_pid605919 00:37:45.139 Removing: /var/run/dpdk/spdk_pid606037 00:37:45.139 Removing: /var/run/dpdk/spdk_pid606579 00:37:45.139 Removing: /var/run/dpdk/spdk_pid607227 00:37:45.139 Removing: /var/run/dpdk/spdk_pid607892 00:37:45.139 Removing: /var/run/dpdk/spdk_pid608401 00:37:45.139 Removing: /var/run/dpdk/spdk_pid608403 00:37:45.139 Removing: /var/run/dpdk/spdk_pid608548 00:37:45.139 Removing: /var/run/dpdk/spdk_pid609938 00:37:45.139 Removing: /var/run/dpdk/spdk_pid610768 00:37:45.139 Removing: 
/var/run/dpdk/spdk_pid615996 00:37:45.139 Removing: /var/run/dpdk/spdk_pid616267 00:37:45.139 Removing: /var/run/dpdk/spdk_pid618767 00:37:45.139 Removing: /var/run/dpdk/spdk_pid622462 00:37:45.139 Removing: /var/run/dpdk/spdk_pid624515 00:37:45.139 Removing: /var/run/dpdk/spdk_pid630890 00:37:45.139 Removing: /var/run/dpdk/spdk_pid635965 00:37:45.139 Removing: /var/run/dpdk/spdk_pid637263 00:37:45.139 Removing: /var/run/dpdk/spdk_pid637930 00:37:45.139 Removing: /var/run/dpdk/spdk_pid648615 00:37:45.139 Removing: /var/run/dpdk/spdk_pid650719 00:37:45.139 Removing: /var/run/dpdk/spdk_pid675932 00:37:45.139 Removing: /var/run/dpdk/spdk_pid678755 00:37:45.139 Removing: /var/run/dpdk/spdk_pid679933 00:37:45.139 Removing: /var/run/dpdk/spdk_pid681129 00:37:45.139 Removing: /var/run/dpdk/spdk_pid681268 00:37:45.139 Removing: /var/run/dpdk/spdk_pid681398 00:37:45.139 Removing: /var/run/dpdk/spdk_pid681420 00:37:45.139 Removing: /var/run/dpdk/spdk_pid681851 00:37:45.139 Removing: /var/run/dpdk/spdk_pid683158 00:37:45.139 Removing: /var/run/dpdk/spdk_pid683768 00:37:45.139 Removing: /var/run/dpdk/spdk_pid684071 00:37:45.139 Removing: /var/run/dpdk/spdk_pid685683 00:37:45.139 Removing: /var/run/dpdk/spdk_pid686103 00:37:45.139 Removing: /var/run/dpdk/spdk_pid686549 00:37:45.139 Removing: /var/run/dpdk/spdk_pid689051 00:37:45.139 Removing: /var/run/dpdk/spdk_pid692306 00:37:45.139 Removing: /var/run/dpdk/spdk_pid695832 00:37:45.139 Removing: /var/run/dpdk/spdk_pid719346 00:37:45.139 Removing: /var/run/dpdk/spdk_pid722067 00:37:45.139 Removing: /var/run/dpdk/spdk_pid725873 00:37:45.139 Removing: /var/run/dpdk/spdk_pid726922 00:37:45.139 Removing: /var/run/dpdk/spdk_pid728402 00:37:45.139 Removing: /var/run/dpdk/spdk_pid731070 00:37:45.139 Removing: /var/run/dpdk/spdk_pid733302 00:37:45.139 Removing: /var/run/dpdk/spdk_pid737503 00:37:45.139 Removing: /var/run/dpdk/spdk_pid737506 00:37:45.139 Removing: /var/run/dpdk/spdk_pid740266 00:37:45.139 Removing: /var/run/dpdk/spdk_pid740404 00:37:45.139 Removing: /var/run/dpdk/spdk_pid740559 00:37:45.139 Removing: /var/run/dpdk/spdk_pid740928 00:37:45.139 Removing: /var/run/dpdk/spdk_pid740938 00:37:45.139 Removing: /var/run/dpdk/spdk_pid742007 00:37:45.139 Removing: /var/run/dpdk/spdk_pid743183 00:37:45.139 Removing: /var/run/dpdk/spdk_pid744364 00:37:45.139 Removing: /var/run/dpdk/spdk_pid745541 00:37:45.139 Removing: /var/run/dpdk/spdk_pid746715 00:37:45.139 Removing: /var/run/dpdk/spdk_pid747896 00:37:45.139 Removing: /var/run/dpdk/spdk_pid751708 00:37:45.139 Removing: /var/run/dpdk/spdk_pid752038 00:37:45.139 Removing: /var/run/dpdk/spdk_pid753329 00:37:45.139 Removing: /var/run/dpdk/spdk_pid754167 00:37:45.139 Removing: /var/run/dpdk/spdk_pid757756 00:37:45.139 Removing: /var/run/dpdk/spdk_pid760347 00:37:45.139 Removing: /var/run/dpdk/spdk_pid763633 00:37:45.139 Removing: /var/run/dpdk/spdk_pid766938 00:37:45.139 Removing: /var/run/dpdk/spdk_pid773141 00:37:45.139 Removing: /var/run/dpdk/spdk_pid777487 00:37:45.139 Removing: /var/run/dpdk/spdk_pid777492 00:37:45.139 Removing: /var/run/dpdk/spdk_pid789677 00:37:45.139 Removing: /var/run/dpdk/spdk_pid790083 00:37:45.139 Removing: /var/run/dpdk/spdk_pid790508 00:37:45.139 Removing: /var/run/dpdk/spdk_pid791020 00:37:45.139 Removing: /var/run/dpdk/spdk_pid791482 00:37:45.139 Removing: /var/run/dpdk/spdk_pid791970 00:37:45.139 Removing: /var/run/dpdk/spdk_pid792521 00:37:45.139 Removing: /var/run/dpdk/spdk_pid792927 00:37:45.139 Removing: /var/run/dpdk/spdk_pid795830 00:37:45.139 Removing: 
/var/run/dpdk/spdk_pid796065 00:37:45.139 Removing: /var/run/dpdk/spdk_pid799852 00:37:45.139 Removing: /var/run/dpdk/spdk_pid799905 00:37:45.139 Removing: /var/run/dpdk/spdk_pid801626 00:37:45.139 Removing: /var/run/dpdk/spdk_pid806546 00:37:45.139 Removing: /var/run/dpdk/spdk_pid806551 00:37:45.139 Removing: /var/run/dpdk/spdk_pid809441 00:37:45.139 Removing: /var/run/dpdk/spdk_pid810838 00:37:45.139 Removing: /var/run/dpdk/spdk_pid812239 00:37:45.139 Removing: /var/run/dpdk/spdk_pid813027 00:37:45.139 Removing: /var/run/dpdk/spdk_pid814499 00:37:45.139 Removing: /var/run/dpdk/spdk_pid815378 00:37:45.139 Removing: /var/run/dpdk/spdk_pid820616 00:37:45.139 Removing: /var/run/dpdk/spdk_pid820910 00:37:45.139 Removing: /var/run/dpdk/spdk_pid821301 00:37:45.140 Removing: /var/run/dpdk/spdk_pid822858 00:37:45.140 Removing: /var/run/dpdk/spdk_pid823135 00:37:45.140 Removing: /var/run/dpdk/spdk_pid823532 00:37:45.140 Removing: /var/run/dpdk/spdk_pid826591 00:37:45.140 Removing: /var/run/dpdk/spdk_pid826596 00:37:45.140 Removing: /var/run/dpdk/spdk_pid828052 00:37:45.140 Removing: /var/run/dpdk/spdk_pid828416 00:37:45.140 Removing: /var/run/dpdk/spdk_pid828508 00:37:45.140 Clean 00:37:45.140 07:06:32 -- common/autotest_common.sh@1447 -- # return 0 00:37:45.140 07:06:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:45.140 07:06:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.140 07:06:32 -- common/autotest_common.sh@10 -- # set +x 00:37:45.398 07:06:32 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:45.398 07:06:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.398 07:06:32 -- common/autotest_common.sh@10 -- # set +x 00:37:45.398 07:06:32 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:45.398 07:06:32 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:45.398 07:06:32 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:45.398 07:06:32 -- spdk/autotest.sh@391 -- # hash lcov 00:37:45.398 07:06:32 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:45.398 07:06:32 -- spdk/autotest.sh@393 -- # hostname 00:37:45.398 07:06:32 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:45.398 geninfo: WARNING: invalid characters removed from testname! 
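[editor's note] The lcov passes that follow reduce to three stages: the capture above into cov_test.info, a merge with the pre-test baseline, and a series of --remove filters that strip vendored and system paths from the merged report. Condensed to a skeleton (paths shortened, the per-run --rc flags dropped, and one remove call carrying several patterns where the job issues one pattern per call; $SPDK_DIR is a placeholder):

# 1. capture post-test counters from the instrumented build tree
lcov -q -c -d "$SPDK_DIR" --no-external -o cov_test.info
# 2. merge the pre-test baseline with the post-test capture
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# 3. subtract out-of-tree and helper-app code from the merged report
lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_top/*' -o cov_total.info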
00:38:24.173 07:07:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.173 07:07:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:25.555 07:07:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:28.840 07:07:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.375 07:07:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:34.662 07:07:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:37.193 07:07:24 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:37.193 07:07:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.193 07:07:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:37.193 07:07:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.193 07:07:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.193 07:07:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.193 07:07:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.194 07:07:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.194 07:07:24 -- paths/export.sh@5 -- $ export PATH 00:38:37.194 07:07:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.194 07:07:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:37.194 07:07:24 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:37.194 07:07:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721020044.XXXXXX 00:38:37.194 07:07:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721020044.ZwLGoo 00:38:37.194 07:07:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:37.194 07:07:24 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:38:37.194 07:07:24 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:37.194 07:07:24 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:37.194 07:07:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:37.194 07:07:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:37.194 07:07:24 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:37.194 07:07:24 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:37.194 07:07:24 -- common/autotest_common.sh@10 -- $ set +x 00:38:37.194 07:07:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:37.194 07:07:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:37.194 07:07:24 -- pm/common@17 -- $ local monitor 00:38:37.194 07:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:37.194 07:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:37.194 07:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:37.194 
07:07:24 -- pm/common@21 -- $ date +%s 00:38:37.194 07:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:37.194 07:07:24 -- pm/common@21 -- $ date +%s 00:38:37.194 07:07:24 -- pm/common@25 -- $ sleep 1 00:38:37.194 07:07:24 -- pm/common@21 -- $ date +%s 00:38:37.194 07:07:24 -- pm/common@21 -- $ date +%s 00:38:37.194 07:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721020044 00:38:37.194 07:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721020044 00:38:37.194 07:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721020044 00:38:37.194 07:07:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721020044 00:38:37.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721020044_collect-vmstat.pm.log 00:38:37.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721020044_collect-cpu-load.pm.log 00:38:37.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721020044_collect-cpu-temp.pm.log 00:38:37.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721020044_collect-bmc-pm.bmc.pm.log 00:38:38.142 07:07:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:38.142 07:07:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:38.142 07:07:25 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:38.142 07:07:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:38.142 07:07:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:38.142 07:07:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:38.142 07:07:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:38.142 07:07:25 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:38.142 07:07:25 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:38.142 07:07:25 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:38.142 07:07:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:38.142 07:07:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:38.142 07:07:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:38.142 07:07:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:38.142 07:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.142 07:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:38.142 07:07:25 -- pm/common@44 -- $ pid=839658 00:38:38.142 07:07:25 -- pm/common@50 -- $ kill -TERM 839658 00:38:38.142 07:07:25 -- pm/common@42 -- $ for monitor in 
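[editor's note] The collector launches above and the stop_monitor_resources teardown at the end of the run pair up through pid files left in the power output directory. Reduced to a skeleton (outdir stands in for the job's $output/power path, and how each collector backgrounds itself and writes its pid file is not visible in the trace):

outdir=/path/to/output/power   # stand-in for the job's power log directory
ts=$(date +%s)
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    # launch each collector with the same -d/-l/-p arguments the trace shows
    ./scripts/perf/pm/$mon -d "$outdir" -l -p "monitor.autopackage.sh.$ts" &
done
# teardown: signal whatever pid files the collectors recorded (collect-bmc-pm
# runs the same way but under sudo, so its kill is issued with sudo as well)
for pidfile in "$outdir"/collect-*.pid; do
    kill -TERM "$(cat "$pidfile")"
done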
"${MONITOR_RESOURCES[@]}" 00:38:38.142 07:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:38.142 07:07:25 -- pm/common@44 -- $ pid=839659 00:38:38.142 07:07:25 -- pm/common@50 -- $ kill -TERM 839659 00:38:38.142 07:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.142 07:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:38.142 07:07:25 -- pm/common@44 -- $ pid=839661 00:38:38.142 07:07:25 -- pm/common@50 -- $ kill -TERM 839661 00:38:38.142 07:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:38.142 07:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:38.142 07:07:25 -- pm/common@44 -- $ pid=839692 00:38:38.142 07:07:25 -- pm/common@50 -- $ sudo -E kill -TERM 839692 00:38:38.142 + [[ -n 401870 ]] 00:38:38.142 + sudo kill 401870 00:38:38.157 [Pipeline] } 00:38:38.175 [Pipeline] // stage 00:38:38.180 [Pipeline] } 00:38:38.198 [Pipeline] // timeout 00:38:38.203 [Pipeline] } 00:38:38.220 [Pipeline] // catchError 00:38:38.225 [Pipeline] } 00:38:38.242 [Pipeline] // wrap 00:38:38.249 [Pipeline] } 00:38:38.264 [Pipeline] // catchError 00:38:38.273 [Pipeline] stage 00:38:38.275 [Pipeline] { (Epilogue) 00:38:38.290 [Pipeline] catchError 00:38:38.291 [Pipeline] { 00:38:38.307 [Pipeline] echo 00:38:38.308 Cleanup processes 00:38:38.314 [Pipeline] sh 00:38:38.600 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:38.600 839810 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:38.600 839924 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:38.613 [Pipeline] sh 00:38:38.893 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:38.893 ++ grep -v 'sudo pgrep' 00:38:38.893 ++ awk '{print $1}' 00:38:38.893 + sudo kill -9 839810 00:38:38.904 [Pipeline] sh 00:38:39.187 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:51.457 [Pipeline] sh 00:38:51.744 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:51.744 Artifacts sizes are good 00:38:51.760 [Pipeline] archiveArtifacts 00:38:51.768 Archiving artifacts 00:38:52.039 [Pipeline] sh 00:38:52.324 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:52.339 [Pipeline] cleanWs 00:38:52.350 [WS-CLEANUP] Deleting project workspace... 00:38:52.350 [WS-CLEANUP] Deferred wipeout is used... 00:38:52.357 [WS-CLEANUP] done 00:38:52.359 [Pipeline] } 00:38:52.380 [Pipeline] // catchError 00:38:52.393 [Pipeline] sh 00:38:52.676 + logger -p user.info -t JENKINS-CI 00:38:52.686 [Pipeline] } 00:38:52.701 [Pipeline] // stage 00:38:52.706 [Pipeline] } 00:38:52.721 [Pipeline] // node 00:38:52.726 [Pipeline] End of Pipeline 00:38:52.762 Finished: SUCCESS